url stringlengths 23–7.17k | text stringlengths 0–1.65M |
---|---|
https://huggingface.co/nikitawriter | 1 1
Nikita Osadchiy
nikitawriter
https://writer.com
osadchiyn
Research interests
None yet
Organizations
models
None public yet
datasets
None public yet |
https://huggingface.co/starlingfeather | 2
Ambar
starlingfeather
Research interests
None yet
Organizations
models
None public yet
datasets
None public yet |
https://huggingface.co/TorpedoFrank | 1
Alexander
TorpedoFrank
Research interests
None yet
Organizations
models
None public yet
datasets
None public yet |
https://huggingface.co/ZaneRavenholt | Zane Ravenholt
ZaneRavenholt
Research interests
None yet
Organizations
models
None public yet
datasets
None public yet |
https://huggingface.co/kirkg | 2 1
kirk goddard
kirkg
Research interests
None yet
Organizations
models
None public yet
datasets
None public yet |
https://huggingface.co/heathexer | 1
Heath Robinson
heathexer
heathexer
Research interests
None yet
Organizations
models
None public yet
datasets
None public yet |
https://huggingface.co/heathexer-writer | 1
Heath Robinson
heathexer-writer
heathexer
Research interests
None yet
Organizations
models
None public yet
datasets
None public yet |
https://huggingface.co/mark-saad | 1
Mark Saad
mark-saad
Research interests
None yet
Organizations
models
None public yet
datasets
None public yet |
https://huggingface.co/andrewracine | 1 1
Andrew Racine
andrewracine
AndrewRacine
Research interests
None yet
Organizations
models
None public yet
datasets
None public yet |
https://huggingface.co/dsweenor | 1 1
David Sweenor
dsweenor
Research interests
None yet
Organizations
models
None public yet
datasets
None public yet |
https://huggingface.co/hussamo | 1
otri
hussamo
Research interests
None yet
Organizations
models
None public yet
datasets
None public yet |
https://huggingface.co/kirill-writer | 1
Kyrylo Buha
kirill-writer
Research interests
None yet
Organizations
models
None public yet
datasets
None public yet |
https://huggingface.co/chrisbryant | 2
Chris Bryant
chrisbryant
Research interests
None yet
Organizations
models
None public yet
datasets
None public yet |
https://huggingface.co/Juliene | 1
Santos
Juliene
Research interests
None yet
Organizations
models
None public yet
datasets
None public yet |
https://huggingface.co/kevin-w | Kevin Wei
kevin-w
Research interests
None yet
Organizations
models
None public yet
datasets
None public yet |
https://huggingface.co/docs/transformers/quicktour | Quick tour
Get up and running with 🤗 Transformers! Whether you’re a developer or an everyday user, this quick tour will help you get started and show you how to use the pipeline() for inference, load a pretrained model and preprocessor with an AutoClass, and quickly train a model with PyTorch or TensorFlow. If you’re a beginner, we recommend checking out our tutorials or course next for more in-depth explanations of the concepts introduced here.
Before you begin, make sure you have all the necessary libraries installed:
!pip install transformers datasets
You’ll also need to install your preferred machine learning framework:
Pytorch
TensorFlow
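For example, a basic pip install for each framework (consult each library's own install instructions for GPU-specific builds; these two commands are one reasonable starting point, not the only option):
pip install torch
pip install tensorflow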
Pipeline
The pipeline() is the easiest and fastest way to use a pretrained model for inference. You can use the pipeline() out-of-the-box for many tasks across different modalities, some of which are shown in the table below:
For a complete list of available tasks, check out the pipeline API reference.
Task | Description | Modality | Pipeline identifier
---|---|---|---
Text classification | assign a label to a given sequence of text | NLP | pipeline(task="sentiment-analysis")
Text generation | generate text given a prompt | NLP | pipeline(task="text-generation")
Summarization | generate a summary of a sequence of text or document | NLP | pipeline(task="summarization")
Image classification | assign a label to an image | Computer vision | pipeline(task="image-classification")
Image segmentation | assign a label to each individual pixel of an image (supports semantic, panoptic, and instance segmentation) | Computer vision | pipeline(task="image-segmentation")
Object detection | predict the bounding boxes and classes of objects in an image | Computer vision | pipeline(task="object-detection")
Audio classification | assign a label to some audio data | Audio | pipeline(task="audio-classification")
Automatic speech recognition | transcribe speech into text | Audio | pipeline(task="automatic-speech-recognition")
Visual question answering | answer a question about the image, given an image and a question | Multimodal | pipeline(task="vqa")
Document question answering | answer a question about the document, given a document and a question | Multimodal | pipeline(task="document-question-answering")
Image captioning | generate a caption for a given image | Multimodal | pipeline(task="image-to-text")
Start by creating an instance of pipeline() and specifying a task you want to use it for. In this guide, you’ll use the pipeline() for sentiment analysis as an example:
>>> from transformers import pipeline
>>> classifier = pipeline("sentiment-analysis")
The pipeline() downloads and caches a default pretrained model and tokenizer for sentiment analysis. Now you can use the classifier on your target text:
>>> classifier("We are very happy to show you the 🤗 Transformers library.")
[{'label': 'POSITIVE', 'score': 0.9998}]
If you have more than one input, pass your inputs as a list to the pipeline() to return a list of dictionaries:
>>> results = classifier(["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."])
>>> for result in results:
... print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
label: POSITIVE, with score: 0.9998
label: NEGATIVE, with score: 0.5309
The pipeline() can also iterate over an entire dataset for any task you like. For this example, let’s choose automatic speech recognition as our task:
>>> import torch
>>> from transformers import pipeline
>>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
Load an audio dataset (see the 🤗 Datasets Quick Start for more details) you’d like to iterate over. For example, load the MInDS-14 dataset:
>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
You need to make sure the sampling rate of the dataset matches the sampling rate facebook/wav2vec2-base-960h was trained on:
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate))
The audio files are automatically loaded and resampled when calling the "audio" column. Extract the raw waveform arrays from the first 4 samples and pass them as a list to the pipeline:
>>> result = speech_recognizer(dataset[:4]["audio"])
>>> print([d["text"] for d in result])
['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FONDERING HOW I'D SET UP A JOIN TO HELL T WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE APSO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AN I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I FURN A JOINA COUT']
For larger datasets where the inputs are big (like in speech or vision), you’ll want to pass a generator instead of a list so that you don’t load all the inputs into memory at once. Take a look at the pipeline API reference for more information.
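For instance, here is a minimal sketch that streams the same MInDS-14 examples through the pipeline with a generator, so only one sample needs to be materialized at a time (the function name is just illustrative):
>>> def audio_data():
...     for example in dataset:
...         yield example["audio"]

>>> for output in speech_recognizer(audio_data()):
...     print(output["text"])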
Use another model and tokenizer in the pipeline
The pipeline() can accommodate any model from the Hub, making it easy to adapt the pipeline() for other use-cases. For example, if you’d like a model capable of handling French text, use the tags on the Hub to filter for an appropriate model. The top filtered result returns a multilingual BERT model finetuned for sentiment analysis you can use for French text:
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
Pytorch
Use AutoModelForSequenceClassification and AutoTokenizer to load the pretrained model and its associated tokenizer (more on an AutoClass in the next section):
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> model = AutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
TensorFlow
Use TFAutoModelForSequenceClassification and AutoTokenizer to load the pretrained model and its associated tokenizer (more on a TFAutoClass in the next section):
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
>>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
Specify the model and tokenizer in the pipeline(), and now you can apply the classifier on French text:
>>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
>>> classifier("Nous sommes très heureux de vous présenter la bibliothèque 🤗 Transformers.")
[{'label': '5 stars', 'score': 0.7273}]
If you can’t find a model for your use-case, you’ll need to finetune a pretrained model on your data. Take a look at our finetuning tutorial to learn how. Finally, after you’ve finetuned your pretrained model, please consider sharing the model with the community on the Hub to democratize machine learning for everyone! 🤗
AutoClass
Under the hood, the AutoModelForSequenceClassification and AutoTokenizer classes work together to power the pipeline() you used above. An AutoClass is a shortcut that automatically retrieves the architecture of a pretrained model from its name or path. You only need to select the appropriate AutoClass for your task and its associated preprocessing class.
Let’s return to the example from the previous section and see how you can use the AutoClass to replicate the results of the pipeline().
AutoTokenizer
A tokenizer is responsible for preprocessing text into an array of numbers as inputs to a model. There are multiple rules that govern the tokenization process, including how to split a word and at what level words should be split (learn more about tokenization in the tokenizer summary). The most important thing to remember is you need to instantiate a tokenizer with the same model name to ensure you’re using the same tokenization rules a model was pretrained with.
Load a tokenizer with AutoTokenizer:
>>> from transformers import AutoTokenizer
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
Pass your text to the tokenizer:
>>> encoding = tokenizer("We are very happy to show you the 🤗 Transformers library.")
>>> print(encoding)
{'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
The tokenizer returns a dictionary containing:
input_ids: numerical representations of your tokens.
attention_mask: indicates which tokens should be attended to.
A tokenizer can also accept a list of inputs, and pad and truncate the text to return a batch with uniform length:
Pytorch
>>> pt_batch = tokenizer(
... ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
... padding=True,
... truncation=True,
... max_length=512,
... return_tensors="pt",
... )
TensorFlow
>>> tf_batch = tokenizer(
... ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
... padding=True,
... truncation=True,
... max_length=512,
... return_tensors="tf",
... )
Check out the preprocess tutorial for more details about tokenization, and how to use an AutoImageProcessor, AutoFeatureExtractor and AutoProcessor to preprocess image, audio, and multimodal inputs.
AutoModel
Pytorch
🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load an AutoModel like you would load an AutoTokenizer. The only difference is selecting the correct AutoModel for the task. For text (or sequence) classification, you should load AutoModelForSequenceClassification:
>>> from transformers import AutoModelForSequenceClassification
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name)
See the task summary for tasks supported by an AutoModel class.
Now pass your preprocessed batch of inputs directly to the model. You just have to unpack the dictionary by adding **:
>>> pt_outputs = pt_model(**pt_batch)
The model outputs the final activations in the logits attribute. Apply the softmax function to the logits to retrieve the probabilities:
>>> from torch import nn
>>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
>>> print(pt_predictions)
tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725],
[0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>)
TensorFlow
🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load a TFAutoModel like you would load an AutoTokenizer. The only difference is selecting the correct TFAutoModel for the task. For text (or sequence) classification, you should load TFAutoModelForSequenceClassification:
>>> from transformers import TFAutoModelForSequenceClassification
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
See the task summary for tasks supported by an AutoModel class.
Now pass your preprocessed batch of inputs directly to the model. You can pass the tensors as-is:
>>> tf_outputs = tf_model(tf_batch)
The model outputs the final activations in the logits attribute. Apply the softmax function to the logits to retrieve the probabilities:
>>> import tensorflow as tf
>>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1)
>>> tf_predictions
All 🤗 Transformers models (PyTorch or TensorFlow) output the tensors before the final activation function (like softmax) because the final activation function is often fused with the loss. Model outputs are special dataclasses so their attributes are autocompleted in an IDE. The model outputs behave like a tuple or a dictionary (you can index with an integer, a slice or a string) in which case, attributes that are None are ignored.
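For example, the PyTorch outputs from above can be accessed interchangeably by attribute, key, or position (positional indexing skips attributes that are None, such as the loss here):
>>> pt_outputs.logits       # attribute access
>>> pt_outputs["logits"]    # dictionary-style access
>>> pt_outputs[0]           # tuple-style access; all three return the same tensor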
Save a model
Pytorch
Once your model is fine-tuned, you can save it with its tokenizer using PreTrainedModel.save_pretrained():
>>> pt_save_directory = "./pt_save_pretrained"
>>> tokenizer.save_pretrained(pt_save_directory)
>>> pt_model.save_pretrained(pt_save_directory)
When you are ready to use the model again, reload it with PreTrainedModel.from_pretrained():
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained")
TensorFlow
Once your model is fine-tuned, you can save it with its tokenizer using TFPreTrainedModel.save_pretrained():
>>> tf_save_directory = "./tf_save_pretrained"
>>> tokenizer.save_pretrained(tf_save_directory)
>>> tf_model.save_pretrained(tf_save_directory)
When you are ready to use the model again, reload it with TFPreTrainedModel.from_pretrained():
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained")
One particularly cool 🤗 Transformers feature is the ability to save a model and reload it as either a PyTorch or TensorFlow model. The from_pt or from_tf parameter can convert the model from one framework to the other:
Pytorch
>>> from transformers import AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory)
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True)
TensorFlow
>>> from transformers import TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory)
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True)
Custom model builds
You can modify the model’s configuration class to change how a model is built. The configuration specifies a model’s attributes, such as the number of hidden layers or attention heads. You start from scratch when you initialize a model from a custom configuration class. The model attributes are randomly initialized, and you’ll need to train the model before you can use it to get meaningful results.
Start by importing AutoConfig, and then load the pretrained model you want to modify. Within AutoConfig.from_pretrained(), you can specify the attribute you want to change, such as the number of attention heads:
>>> from transformers import AutoConfig
>>> my_config = AutoConfig.from_pretrained("distilbert-base-uncased", n_heads=12)
Pytorch
Create a model from your custom configuration with AutoModel.from_config():
>>> from transformers import AutoModel
>>> my_model = AutoModel.from_config(my_config)
TensorFlow
Create a model from your custom configuration with TFAutoModel.from_config():
>>> from transformers import TFAutoModel
>>> my_model = TFAutoModel.from_config(my_config)
Take a look at the Create a custom architecture guide for more information about building custom configurations.
Trainer - a PyTorch optimized training loop
All models are a standard torch.nn.Module so you can use them in any typical training loop. While you can write your own training loop, 🤗 Transformers provides a Trainer class for PyTorch, which contains the basic training loop and adds additional functionality for features like distributed training, mixed precision, and more.
Depending on your task, you’ll typically pass the following parameters to Trainer:
You’ll start with a PreTrainedModel or a torch.nn.Module:
>>> from transformers import AutoModelForSequenceClassification
>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
TrainingArguments contains the model hyperparameters you can change like learning rate, batch size, and the number of epochs to train for. The default values are used if you don’t specify any training arguments:
>>> from transformers import TrainingArguments
>>> training_args = TrainingArguments(
... output_dir="path/to/save/folder/",
... learning_rate=2e-5,
... per_device_train_batch_size=8,
... per_device_eval_batch_size=8,
... num_train_epochs=2,
... )
Load a preprocessing class like a tokenizer, image processor, feature extractor, or processor:
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
Load a dataset:
>>> from datasets import load_dataset
>>> dataset = load_dataset("rotten_tomatoes")
Create a function to tokenize the dataset:
>>> def tokenize_dataset(dataset):
... return tokenizer(dataset["text"])
Then apply it over the entire dataset with map:
>>> dataset = dataset.map(tokenize_dataset, batched=True)
Create a DataCollatorWithPadding to batch examples from your dataset:
>>> from transformers import DataCollatorWithPadding
>>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
Now gather all these classes in Trainer:
>>> from transformers import Trainer
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=dataset["train"],
... eval_dataset=dataset["test"],
... tokenizer=tokenizer,
... data_collator=data_collator,
... )
When you’re ready, call train() to start training:
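Continuing from the Trainer assembled above:
>>> trainer.train()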
For tasks - like translation or summarization - that use a sequence-to-sequence model, use the Seq2SeqTrainer and Seq2SeqTrainingArguments classes instead.
You can customize the training loop behavior by subclassing the methods inside Trainer. This allows you to customize features such as the loss function, optimizer, and scheduler. Take a look at the Trainer reference for which methods can be subclassed.
The other way to customize the training loop is by using Callbacks. You can use callbacks to integrate with other libraries and inspect the training loop to report on progress or stop the training early. Callbacks do not modify anything in the training loop itself. To customize something like the loss function, you need to subclass the Trainer instead.
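As an illustration, here is a hypothetical callback (the class name and message are made up for this sketch) that hooks into the end of each epoch; it is registered with add_callback() and does not change the training loop itself:
>>> from transformers import TrainerCallback

>>> class EpochLoggerCallback(TrainerCallback):
...     def on_epoch_end(self, args, state, control, **kwargs):
...         # state.epoch tracks training progress; printing it does not affect training
...         print(f"Finished epoch {state.epoch}")

>>> trainer.add_callback(EpochLoggerCallback())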
Train with TensorFlow
All models are a standard tf.keras.Model so they can be trained in TensorFlow with the Keras API. 🤗 Transformers provides the prepare_tf_dataset() method to easily load your dataset as a tf.data.Dataset so you can start training right away with Keras’ compile and fit methods.
You’ll start with a TFPreTrainedModel or a tf.keras.Model:
>>> from transformers import TFAutoModelForSequenceClassification
>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
Load a preprocessing class like a tokenizer, image processor, feature extractor, or processor:
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
Create a function to tokenize the dataset:
>>> def tokenize_dataset(dataset):
... return tokenizer(dataset["text"])
Apply the tokenizer over the entire dataset with map and then pass the dataset and tokenizer to prepare_tf_dataset(). You can also change the batch size and shuffle the dataset here if you’d like:
>>> dataset = dataset.map(tokenize_dataset)
>>> tf_dataset = model.prepare_tf_dataset(
... dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer
... )
When you’re ready, you can call compile and fit to start training. Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:
>>> from tensorflow.keras.optimizers import Adam
>>> model.compile(optimizer=Adam(3e-5))
>>> model.fit(tf_dataset)
What's next?
Now that you’ve completed the 🤗 Transformers quick tour, check out our guides and learn how to do more specific things like writing a custom model, fine-tuning a model for a task, and how to train a model with a script. If you’re interested in learning more about 🤗 Transformers core concepts, grab a cup of coffee and take a look at our Conceptual Guides! |
https://huggingface.co/docs/transformers/training | Train a TensorFlow model with Keras
You can also train 🤗 Transformers models in TensorFlow with the Keras API!
Loading data for Keras
When you want to train a 🤗 Transformers model with the Keras API, you need to convert your dataset to a format that Keras understands. If your dataset is small, you can just convert the whole thing to NumPy arrays and pass it to Keras. Let’s try that first before we do anything more complicated.
First, load a dataset. We’ll use the CoLA dataset from the GLUE benchmark, since it’s a simple binary text classification task, and just take the training split for now.
from datasets import load_dataset
dataset = load_dataset("glue", "cola")
dataset = dataset["train"]
Next, load a tokenizer and tokenize the data as NumPy arrays. Note that the labels are already a list of 0 and 1s, so we can just convert that directly to a NumPy array without tokenization!
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenized_data = tokenizer(dataset["sentence"], return_tensors="np", padding=True)
# The tokenizer returns a BatchEncoding; convert it to a plain dict for Keras
tokenized_data = dict(tokenized_data)
# The labels are already a list of 0s and 1s, so no tokenization is needed
labels = np.array(dataset["label"])
Finally, load, compile, and fit the model. Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to:
from transformers import TFAutoModelForSequenceClassification
from tensorflow.keras.optimizers import Adam
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
model.compile(optimizer=Adam(3e-5))
model.fit(tokenized_data, labels)
You don’t have to pass a loss argument to your models when you compile() them! Hugging Face models automatically choose a loss that is appropriate for their task and model architecture if this argument is left blank. You can always override this by specifying a loss yourself if you want to!
This approach works great for smaller datasets, but for larger datasets it can become a problem. Why? Because the tokenized array and labels would have to be fully loaded into memory, and because NumPy doesn’t handle “jagged” arrays, every tokenized sample would have to be padded to the length of the longest sample in the whole dataset. That’s going to make your array even bigger, and all those padding tokens will slow down training too!
Loading data as a tf.data.Dataset
If you want to avoid slowing down training, you can load your data as a tf.data.Dataset instead. Although you can write your own tf.data pipeline if you want, we have two convenience methods for doing this:
prepare_tf_dataset(): This is the method we recommend in most cases. Because it is a method on your model, it can inspect the model to automatically figure out which columns are usable as model inputs, and discard the others to make a simpler, more performant dataset.
to_tf_dataset: This method is lower-level, and is useful when you want to control exactly how your dataset is created, by specifying exactly which columns and label_cols to include (see the sketch below).
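A rough sketch of the lower-level route, assuming the dataset has already been tokenized (the column names below are the standard BERT tokenizer outputs plus the CoLA label column, and the collator is one reasonable choice rather than the only one):
from transformers import DataCollatorWithPadding

collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="np")
tf_dataset = dataset.to_tf_dataset(
    columns=["input_ids", "token_type_ids", "attention_mask"],
    label_cols=["label"],
    batch_size=16,
    shuffle=True,
    collate_fn=collator,
)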
Before you can use prepare_tf_dataset(), you will need to add the tokenizer outputs to your dataset as columns, as shown in the following code sample:
def tokenize_dataset(data):
return tokenizer(data["text"])
dataset = dataset.map(tokenize_dataset)
Remember that Hugging Face datasets are stored on disk by default, so this will not inflate your memory usage! Once the columns have been added, you can stream batches from the dataset and add padding to each batch, which greatly reduces the number of padding tokens compared to padding the entire dataset.
>>> tf_dataset = model.prepare_tf_dataset(dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer)
Note that in the code sample above, you need to pass the tokenizer to prepare_tf_dataset so it can correctly pad batches as they’re loaded. If all the samples in your dataset are the same length and no padding is necessary, you can skip this argument. If you need to do something more complex than just padding samples (e.g. corrupting tokens for masked language modelling), you can use the collate_fn argument instead to pass a function that will be called to transform the list of samples into a batch and apply any preprocessing you want. See our examples or notebooks to see this approach in action.
Once you’ve created a tf.data.Dataset, you can compile and fit the model as before:
model.compile(optimizer=Adam(3e-5))
model.fit(tf_dataset) |
https://huggingface.co/docs/transformers/preprocessing | Preprocess
Before you can train a model on a dataset, it needs to be preprocessed into the expected model input format. Whether your data is text, images, or audio, it needs to be converted and assembled into batches of tensors. 🤗 Transformers provides a set of preprocessing classes to help prepare your data for the model. In this tutorial, you’ll learn that for:
Text, use a Tokenizer to convert text into a sequence of tokens, create a numerical representation of the tokens, and assemble them into tensors.
Speech and audio, use a Feature extractor to extract sequential features from audio waveforms and convert them into tensors.
Image inputs, use an ImageProcessor to convert images into tensors.
Multimodal inputs, use a Processor to combine a tokenizer and a feature extractor or image processor.
AutoProcessor always works and automatically chooses the correct class for the model you’re using, whether you’re using a tokenizer, image processor, feature extractor or processor.
Before you begin, install 🤗 Datasets so you can load some datasets to experiment with:
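For example, with pip:
pip install datasets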
Natural Language Processing
The main tool for preprocessing textual data is a tokenizer. A tokenizer splits text into tokens according to a set of rules. The tokens are converted into numbers and then tensors, which become the model inputs. Any additional inputs required by the model are added by the tokenizer.
If you plan on using a pretrained model, it’s important to use the associated pretrained tokenizer. This ensures the text is split the same way as the pretraining corpus, and uses the same corresponding tokens-to-index (usually referred to as the vocab) during pretraining.
Get started by loading a pretrained tokenizer with the AutoTokenizer.from_pretrained() method. This downloads the vocab a model was pretrained with:
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
Then pass your text to the tokenizer:
>>> encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.")
>>> print(encoded_input)
{'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
The tokenizer returns a dictionary with three important items:
input_ids are the indices corresponding to each token in the sentence.
attention_mask indicates whether a token should be attended to or not.
token_type_ids identifies which sequence a token belongs to when there is more than one sequence.
Return your input by decoding the input_ids:
>>> tokenizer.decode(encoded_input["input_ids"])
'[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. [SEP]'
As you can see, the tokenizer added two special tokens - CLS and SEP (classifier and separator) - to the sentence. Not all models need special tokens, but if they do, the tokenizer automatically adds them for you.
If there are several sentences you want to preprocess, pass them as a list to the tokenizer:
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_inputs = tokenizer(batch_sentences)
>>> print(encoded_inputs)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1]]}
Pad
Sentences aren’t always the same length which can be an issue because tensors, the model inputs, need to have a uniform shape. Padding is a strategy for ensuring tensors are rectangular by adding a special padding token to shorter sentences.
Set the padding parameter to True to pad the shorter sequences in the batch to match the longest sequence:
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
The first and third sentences are now padded with 0’s because they are shorter.
Truncation
On the other end of the spectrum, sometimes a sequence may be too long for a model to handle. In this case, you’ll need to truncate the sequence to a shorter length.
Set the truncation parameter to True to truncate a sequence to the maximum length accepted by the model:
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
Check out the Padding and truncation concept guide to learn more about the different padding and truncation arguments.
Build tensors
Finally, you want the tokenizer to return the actual tensors that get fed to the model.
Set the return_tensors parameter to either pt for PyTorch, or tf for TensorFlow:
Pytorch
Hide Pytorch content
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
>>> print(encoded_input)
{'input_ids': tensor([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]]),
'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]),
'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]])}
TensorFlow
Hide TensorFlow content
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf")
>>> print(encoded_input)
{'input_ids': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
dtype=int32)>,
'token_type_ids': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>,
'attention_mask': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>}
Audio
For audio tasks, you’ll need a feature extractor to prepare your dataset for the model. The feature extractor is designed to extract features from raw audio data, and convert them into tensors.
Load the MInDS-14 dataset (see the 🤗 Datasets tutorial for more details on how to load a dataset) to see how you can use a feature extractor with audio datasets:
>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
Access the first element of the audio column to take a look at the input. Calling the audio column automatically loads and resamples the audio file:
>>> dataset[0]["audio"]
{'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414,
0. , 0. ], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'sampling_rate': 8000}
This returns three items:
array is the speech signal loaded - and potentially resampled - as a 1D array.
path points to the location of the audio file.
sampling_rate refers to how many data points in the speech signal are measured per second.
For this tutorial, you’ll use the Wav2Vec2 model. Take a look at the model card, and you’ll learn Wav2Vec2 is pretrained on 16kHz sampled speech audio. It is important your audio data’s sampling rate matches the sampling rate of the dataset used to pretrain the model. If your data’s sampling rate isn’t the same, then you need to resample your data.
Use 🤗 Datasets’ cast_column method to upsample the sampling rate to 16kHz:
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
Call the audio column again to resample the audio file:
>>> dataset[0]["audio"]
{'array': array([ 2.3443763e-05, 2.1729663e-04, 2.2145823e-04, ...,
3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'sampling_rate': 16000}
Next, load a feature extractor to normalize and pad the input. When padding textual data, a 0 is added for shorter sequences. The same idea applies to audio data. The feature extractor adds a 0 - interpreted as silence - to array.
Load the feature extractor with AutoFeatureExtractor.from_pretrained():
>>> from transformers import AutoFeatureExtractor
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
Pass the audio array to the feature extractor. We also recommend adding the sampling_rate argument in the feature extractor in order to better debug any silent errors that may occur.
>>> audio_input = [dataset[0]["audio"]["array"]]
>>> feature_extractor(audio_input, sampling_rate=16000)
{'input_values': [array([ 3.8106556e-04, 2.7506407e-03, 2.8015103e-03, ...,
5.6335266e-04, 4.6588284e-06, -1.7142107e-04], dtype=float32)]}
Just like the tokenizer, you can apply padding or truncation to handle variable sequences in a batch. Take a look at the sequence length of these two audio samples:
>>> dataset[0]["audio"]["array"].shape
(173398,)
>>> dataset[1]["audio"]["array"].shape
(106496,)
Create a function to preprocess the dataset so the audio samples are the same lengths. Specify a maximum sample length, and the feature extractor will either pad or truncate the sequences to match it:
>>> def preprocess_function(examples):
... audio_arrays = [x["array"] for x in examples["audio"]]
... inputs = feature_extractor(
... audio_arrays,
... sampling_rate=16000,
... padding=True,
... max_length=100000,
... truncation=True,
... )
... return inputs
Apply the preprocess_function to the first few examples in the dataset:
>>> processed_dataset = preprocess_function(dataset[:5])
The sample lengths are now the same and match the specified maximum length. You can pass your processed dataset to the model now!
>>> processed_dataset["input_values"][0].shape
(100000,)
>>> processed_dataset["input_values"][1].shape
(100000,)
Computer vision
For computer vision tasks, you’ll need an image processor to prepare your dataset for the model. Image preprocessing consists of several steps that convert images into the input expected by the model. These steps include but are not limited to resizing, normalizing, color channel correction, and converting images to tensors.
Image preprocessing often follows some form of image augmentation. Both image preprocessing and image augmentation transform image data, but they serve different purposes:
Image augmentation alters images in a way that can help prevent overfitting and increase the robustness of the model. You can get creative in how you augment your data - adjust brightness and colors, crop, rotate, resize, zoom, etc. However, be mindful not to change the meaning of the images with your augmentations.
Image preprocessing guarantees that the images match the model’s expected input format. When fine-tuning a computer vision model, images must be preprocessed exactly as when the model was initially trained.
You can use any library you like for image augmentation. For image preprocessing, use the ImageProcessor associated with the model.
Load the food101 dataset (see the 🤗 Datasets tutorial for more details on how to load a dataset) to see how you can use an image processor with computer vision datasets:
Use 🤗 Datasets split parameter to only load a small sample from the training split since the dataset is quite large!
>>> from datasets import load_dataset
>>> dataset = load_dataset("food101", split="train[:100]")
Next, take a look at the image with 🤗 Datasets Image feature:
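For example, indexing into the dataset returns a PIL image object:
>>> dataset[0]["image"]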
Load the image processor with AutoImageProcessor.from_pretrained():
>>> from transformers import AutoImageProcessor
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
First, let’s add some image augmentation. You can use any library you prefer, but in this tutorial, we’ll use torchvision’s transforms module. If you’re interested in using another data augmentation library, learn how in the Albumentations or Kornia notebooks.
Here we use Compose to chain together a couple of transforms - RandomResizedCrop and ColorJitter. Note that for resizing, we can get the image size requirements from the image_processor. For some models, an exact height and width are expected, for others only the shortest_edge is defined.
>>> from torchvision.transforms import RandomResizedCrop, ColorJitter, Compose
>>> size = (
... image_processor.size["shortest_edge"]
... if "shortest_edge" in image_processor.size
... else (image_processor.size["height"], image_processor.size["width"])
... )
>>> _transforms = Compose([RandomResizedCrop(size), ColorJitter(brightness=0.5, hue=0.5)])
The model accepts pixel_values as its input. ImageProcessor can take care of normalizing the images, and generating appropriate tensors. Create a function that combines image augmentation and image preprocessing for a batch of images and generates pixel_values:
>>> def transforms(examples):
... images = [_transforms(img.convert("RGB")) for img in examples["image"]]
... examples["pixel_values"] = image_processor(images, do_resize=False, return_tensors="pt")["pixel_values"]
... return examples
In the example above we set do_resize=False because we have already resized the images in the image augmentation transformation, and leveraged the size attribute from the appropriate image_processor. If you do not resize images during image augmentation, leave this parameter out. By default, ImageProcessor will handle the resizing.
If you wish to normalize images as a part of the augmentation transformation, use the image_processor.image_mean and image_processor.image_std values.
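For example, a sketch of folding normalization into the torchvision pipeline (note the added ToTensor step, since Normalize expects a tensor rather than a PIL image; if you go this route you would also tell the image processor not to normalize again, e.g. with do_normalize=False):
>>> from torchvision.transforms import RandomResizedCrop, ColorJitter, ToTensor, Normalize, Compose

>>> _transforms = Compose(
...     [
...         RandomResizedCrop(size),
...         ColorJitter(brightness=0.5, hue=0.5),
...         ToTensor(),  # convert the PIL image to a tensor before normalizing
...         Normalize(mean=image_processor.image_mean, std=image_processor.image_std),
...     ]
... )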
Then use 🤗 Datasets set_transform to apply the transforms on the fly:
>>> dataset.set_transform(transforms)
Now when you access the image, you’ll notice the image processor has added pixel_values. You can pass your processed dataset to the model now!
Here is what the image looks like after the transforms are applied. The image has been randomly cropped and its color properties are different.
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> img = dataset[0]["pixel_values"]
>>> plt.imshow(img.permute(1, 2, 0))
For tasks like object detection, semantic segmentation, instance segmentation, and panoptic segmentation, ImageProcessor offers post-processing methods. These methods convert the model’s raw outputs into meaningful predictions such as bounding boxes or segmentation maps.
Pad
In some cases, for instance, when fine-tuning DETR, the model applies scale augmentation at training time. This may cause images to be different sizes in a batch. You can use DetrImageProcessor.pad() and define a custom collate_fn to batch images together.
>>> def collate_fn(batch):
... pixel_values = [item["pixel_values"] for item in batch]
... encoding = image_processor.pad(pixel_values, return_tensors="pt")
... labels = [item["labels"] for item in batch]
... batch = {}
... batch["pixel_values"] = encoding["pixel_values"]
... batch["pixel_mask"] = encoding["pixel_mask"]
... batch["labels"] = labels
... return batch
Multimodal
For tasks involving multimodal inputs, you’ll need a processor to prepare your dataset for the model. A processor couples together two processing objects such as a tokenizer and a feature extractor.
Load the LJ Speech dataset (see the 🤗 Datasets tutorial for more details on how to load a dataset) to see how you can use a processor for automatic speech recognition (ASR):
>>> from datasets import load_dataset, Audio
>>> lj_speech = load_dataset("lj_speech", split="train")
For ASR, you’re mainly focused on audio and text so you can remove the other columns:
>>> lj_speech = lj_speech.map(remove_columns=["file", "id", "normalized_text"])
Now take a look at the audio and text columns:
>>> lj_speech[0]["audio"]
{'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, ...,
7.3242188e-04, 2.1362305e-04, 6.1035156e-05], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav',
'sampling_rate': 22050}
>>> lj_speech[0]["text"]
'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition'
Remember you should always resample your audio dataset’s sampling rate to match the sampling rate of the dataset used to pretrain a model!
>>> lj_speech = lj_speech.cast_column("audio", Audio(sampling_rate=16_000))
Load a processor with AutoProcessor.from_pretrained():
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
Create a function to process the audio data contained in array to input_values, and tokenize text to labels. These are the inputs to the model:
>>> def prepare_dataset(example):
... audio = example["audio"]
... example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000))
... return example
Apply the prepare_dataset function to a sample:
>>> prepare_dataset(lj_speech[0])
The processor has now added input_values and labels, and the audio has also been correctly resampled to 16kHz. You can pass your processed dataset to the model now! |
https://huggingface.co/docs/transformers/autoclass_tutorial | Load pretrained instances with an AutoClass
With so many different Transformer architectures, it can be challenging to create one for your checkpoint. As a part of 🤗 Transformers core philosophy to make the library easy, simple and flexible to use, an AutoClass automatically infers and loads the correct architecture from a given checkpoint. The from_pretrained() method lets you quickly load a pretrained model for any architecture so you don’t have to devote time and resources to train a model from scratch. Producing this type of checkpoint-agnostic code means if your code works for one checkpoint, it will work with another checkpoint - as long as it was trained for a similar task - even if the architecture is different.
Remember, architecture refers to the skeleton of the model and checkpoints are the weights for a given architecture. For example, BERT is an architecture, while bert-base-uncased is a checkpoint. Model is a general term that can mean either architecture or checkpoint.
In this tutorial, learn to:
Load a pretrained tokenizer.
Load a pretrained image processor.
Load a pretrained feature extractor.
Load a pretrained processor.
Load a pretrained model.
AutoTokenizer
Nearly every NLP task begins with a tokenizer. A tokenizer converts your input into a format that can be processed by the model.
Load a tokenizer with AutoTokenizer.from_pretrained():
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
Then tokenize your input as shown below:
>>> sequence = "In a hole in the ground there lived a hobbit."
>>> print(tokenizer(sequence))
{'input_ids': [101, 1999, 1037, 4920, 1999, 1996, 2598, 2045, 2973, 1037, 7570, 10322, 4183, 1012, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
AutoImageProcessor
For vision tasks, an image processor processes the image into the correct input format.
>>> from transformers import AutoImageProcessor
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
AutoFeatureExtractor
For audio tasks, a feature extractor processes the audio signal into the correct input format.
Load a feature extractor with AutoFeatureExtractor.from_pretrained():
>>> from transformers import AutoFeatureExtractor
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(
... "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"
... )
AutoProcessor
Multimodal tasks require a processor that combines two types of preprocessing tools. For example, the LayoutLMV2 model requires an image processor to handle images and a tokenizer to handle text; a processor combines both of them.
Load a processor with AutoProcessor.from_pretrained():
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased")
AutoModel
Pytorch
Finally, the AutoModelFor classes let you load a pretrained model for a given task (see here for a complete list of available tasks). For example, load a model for sequence classification with AutoModelForSequenceClassification.from_pretrained():
>>> from transformers import AutoModelForSequenceClassification
>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
Easily reuse the same checkpoint to load an architecture for a different task:
>>> from transformers import AutoModelForTokenClassification
>>> model = AutoModelForTokenClassification.from_pretrained("distilbert-base-uncased")
For PyTorch models, the from_pretrained() method uses torch.load() which internally uses pickle and is known to be insecure. In general, never load a model that could have come from an untrusted source, or that could have been tampered with. This security risk is partially mitigated for public models hosted on the Hugging Face Hub, which are scanned for malware at each commit. See the Hub documentation for best practices like signed commit verification with GPG.
TensorFlow and Flax checkpoints are not affected, and can be loaded within PyTorch architectures using the from_tf and from_flax kwargs for the from_pretrained method to circumvent this issue.
Generally, we recommend using the AutoTokenizer class and the AutoModelFor class to load pretrained instances of models. This will ensure you load the correct architecture every time. In the next tutorial, learn how to use your newly loaded tokenizer, image processor, feature extractor and processor to preprocess a dataset for fine-tuning.
TensorFlow
Finally, the TFAutoModelFor classes let you load a pretrained model for a given task (see here for a complete list of available tasks). For example, load a model for sequence classification with TFAutoModelForSequenceClassification.from_pretrained():
>>> from transformers import TFAutoModelForSequenceClassification
>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
Easily reuse the same checkpoint to load an architecture for a different task:
>>> from transformers import TFAutoModelForTokenClassification
>>> model = TFAutoModelForTokenClassification.from_pretrained("distilbert-base-uncased")
Generally, we recommend using the AutoTokenizer class and the TFAutoModelFor class to load pretrained instances of models. This will ensure you load the correct architecture every time. In the next tutorial, learn how to use your newly loaded tokenizer, image processor, feature extractor and processor to preprocess a dataset for fine-tuning. |
https://huggingface.co/docs/transformers/installation | Installation
Install 🤗 Transformers for whichever deep learning library you’re working with, set up your cache, and optionally configure 🤗 Transformers to run offline.
🤗 Transformers is tested on Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+, and Flax. Follow the installation instructions below for the deep learning library you are using:
PyTorch installation instructions.
TensorFlow 2.0 installation instructions.
Flax installation instructions.
Install with pip
You should install 🤗 Transformers in a virtual environment. If you’re unfamiliar with Python virtual environments, take a look at this guide. A virtual environment makes it easier to manage different projects, and avoid compatibility issues between dependencies.
Start by creating a virtual environment in your project directory:
Activate the virtual environment. On Linux and macOS:
On Windows, activate the virtual environment with the Scripts path shown below.
Now you’re ready to install 🤗 Transformers with the following command:
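A minimal sketch of those steps, assuming you name the environment .env:
python -m venv .env
source .env/bin/activate   # Linux and macOS
.env/Scripts/activate      # Windows
pip install transformers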
For CPU-support only, you can conveniently install 🤗 Transformers and a deep learning library in one line. For example, install 🤗 Transformers and PyTorch with:
pip install 'transformers[torch]'
🤗 Transformers and TensorFlow 2.0:
pip install 'transformers[tf-cpu]'
M1 / ARM Users
You will need to install the following before installing TensorFlow 2.0:
brew install cmake
brew install pkg-config
🤗 Transformers and Flax:
pip install 'transformers[flax]'
Finally, check if 🤗 Transformers has been properly installed by running the following command. It will download a pretrained model:
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
Then print out the label and score:
[{'label': 'POSITIVE', 'score': 0.9998704791069031}]
Install from source
Install 🤗 Transformers from source with the following command:
pip install git+https://github.com/huggingface/transformers
This command installs the bleeding edge main version rather than the latest stable version. The main version is useful for staying up-to-date with the latest developments. For instance, if a bug has been fixed since the last official release but a new release hasn’t been rolled out yet. However, this means the main version may not always be stable. We strive to keep the main version operational, and most issues are usually resolved within a few hours or a day. If you run into a problem, please open an Issue so we can fix it even sooner!
Check if 🤗 Transformers has been properly installed by running the following command:
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))"
Editable install
You will need an editable install if you’d like to:
Use the main version of the source code.
Contribute to 🤗 Transformers and need to test changes in the code.
Clone the repository and install 🤗 Transformers with the following commands:
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
These commands link the folder you cloned the repository into with your Python library paths. Python will now look inside the folder you cloned to in addition to the normal library paths. For example, if your Python packages are typically installed in ~/anaconda3/envs/main/lib/python3.7/site-packages/, Python will also search the folder you cloned to: ~/transformers/.
You must keep the transformers folder if you want to keep using the library.
Now you can easily update your clone to the latest version of 🤗 Transformers with the following command:
cd ~/transformers/
git pull
Your Python environment will find the main version of 🤗 Transformers on the next run.
Install with conda
Install from the conda channel huggingface:
conda install -c huggingface transformers
Cache setup
Pretrained models are downloaded and locally cached at: ~/.cache/huggingface/hub. This is the default directory given by the shell environment variable TRANSFORMERS_CACHE. On Windows, the default directory is given by C:\Users\username\.cache\huggingface\hub. You can change the shell environment variables shown below - in order of priority - to specify a different cache directory:
Shell environment variable (default): HUGGINGFACE_HUB_CACHE or TRANSFORMERS_CACHE.
Shell environment variable: HF_HOME.
Shell environment variable: XDG_CACHE_HOME + /huggingface.
🤗 Transformers will use the shell environment variables PYTORCH_TRANSFORMERS_CACHE or PYTORCH_PRETRAINED_BERT_CACHE if you are coming from an earlier iteration of this library and have set those environment variables, unless you specify the shell environment variable TRANSFORMERS_CACHE.
Offline mode
Run 🤗 Transformers in a firewalled or offline environment with locally cached files by setting the environment variable TRANSFORMERS_OFFLINE=1.
Add 🤗 Datasets to your offline training workflow with the environment variable HF_DATASETS_OFFLINE=1.
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \
python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
This script should run without hanging or waiting to timeout because it won’t attempt to download the model from the Hub.
You can also bypass loading a model from the Hub from each from_pretrained() call with the local_files_only parameter. When set to True, only local files are loaded:
from transformers import T5Model
model = T5Model.from_pretrained("./path/to/local/directory", local_files_only=True)
Fetch models and tokenizers to use offline
Another option for using 🤗 Transformers offline is to download the files ahead of time, and then point to their local path when you need to use them offline. There are three ways to do this:
Download a file through the user interface on the Model Hub by clicking on the ↓ icon.
Use the PreTrainedModel.from_pretrained() and PreTrainedModel.save_pretrained() workflow:
Download your files ahead of time with PreTrainedModel.from_pretrained():
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")
Save your files to a specified directory with PreTrainedModel.save_pretrained():
>>> tokenizer.save_pretrained("./your/path/bigscience_t0")
>>> model.save_pretrained("./your/path/bigscience_t0")
Now when you’re offline, reload your files with PreTrainedModel.from_pretrained() from the specified directory:
>>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("./your/path/bigscience_t0")
Programmatically download files with the huggingface_hub library:
Install the huggingface_hub library in your virtual environment:
python -m pip install huggingface_hub
Use the hf_hub_download function to download a file to a specific path. For example, the following command downloads the config.json file from the T0 model to your desired path:
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0")
Once your file is downloaded and locally cached, specify its local path to load and use it:
>>> from transformers import AutoConfig
>>> config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json")
See the How to download files from the Hub section for more details on downloading files stored on the Hub. |
https://huggingface.co/docs/transformers/pipeline_tutorial | Pipelines for inference
The pipeline() makes it simple to use any model from the Hub for inference on any language, computer vision, speech, and multimodal tasks. Even if you don’t have experience with a specific modality or aren’t familiar with the underlying code behind the models, you can still use them for inference with the pipeline()! This tutorial will teach you to:
Use a pipeline() for inference.
Use a specific tokenizer or model.
Use a pipeline() for audio, vision, and multimodal tasks.
Take a look at the pipeline() documentation for a complete list of supported tasks and available parameters.
Pipeline usage
While each task has an associated pipeline(), it is simpler to use the general pipeline() abstraction which contains all the task-specific pipelines. The pipeline() automatically loads a default model and a preprocessing class capable of inference for your task. Let’s take the example of using the pipeline() for automatic speech recognition (ASR), or speech-to-text.
Start by creating a pipeline() and specify the inference task:
>>> from transformers import pipeline
>>> transcriber = pipeline(task="automatic-speech-recognition")
Pass your input to the pipeline(). In the case of speech recognition, this is an audio input file:
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': 'I HAVE A DREAM BUT ONE DAY THIS NATION WILL RISE UP LIVE UP THE TRUE MEANING OF ITS TREES'}
Not the result you had in mind? Check out some of the most downloaded automatic speech recognition models on the Hub to see if you can get a better transcription.
Let’s try the Whisper large-v2 model from OpenAI. Whisper was released 2 years later than Wav2Vec2, and was trained on close to 10x more data. As such, it beats Wav2Vec2 on most downstream benchmarks. It also has the added benefit of predicting punctuation and casing, neither of which are possible with Wav2Vec2.
Let’s give it a try here to see how it performs:
>>> transcriber = pipeline(model="openai/whisper-large-v2")
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
Now this result looks more accurate! For a deep-dive comparison on Wav2Vec2 vs Whisper, refer to the Audio Transformers Course. We really encourage you to check out the Hub for models in different languages, models specialized in your field, and more. You can check out and compare model results directly from your browser on the Hub to see if it fits or handles corner cases better than other ones. And if you don’t find a model for your use case, you can always start training your own!
If you have several inputs, you can pass your input as a list:
transcriber(
[
"https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac",
"https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac",
]
)
Pipelines are great for experimentation as switching from one model to another is trivial; however, there are some ways to optimize them for larger workloads than experimentation. See the following guides that dive into iterating over whole datasets or using pipelines in a webserver:
Using pipelines on a dataset
Using pipelines for a webserver
Parameters
pipeline() supports many parameters; some are task specific, and some are general to all pipelines. In general, you can specify parameters anywhere you want:
transcriber = pipeline(model="openai/whisper-large-v2", my_parameter=1)
out = transcriber(...)
out = transcriber(..., my_parameter=2)
out = transcriber(...)
Let’s check out 3 important ones:
Device
If you use device=n, the pipeline automatically puts the model on the specified device. This will work regardless of whether you are using PyTorch or TensorFlow.
transcriber = pipeline(model="openai/whisper-large-v2", device=0)
If the model is too large for a single GPU and you are using PyTorch, you can set device_map="auto" to automatically determine how to load and store the model weights. Using the device_map argument requires the 🤗 Accelerate package:
pip install --upgrade accelerate
The following code automatically loads and stores model weights across devices:
transcriber = pipeline(model="openai/whisper-large-v2", device_map="auto")
Note that if device_map="auto" is passed, do not also add the device argument when instantiating your pipeline, as this may lead to unexpected behavior!
Batch size
By default, pipelines will not batch inference for reasons explained in detail here. The reason is that batching is not necessarily faster, and can actually be quite a bit slower in some cases.
But if it works in your use case, you can use:
transcriber = pipeline(model="openai/whisper-large-v2", device=0, batch_size=2)
audio_filenames = [f"https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/{i}.flac" for i in range(1, 5)]
texts = transcriber(audio_filenames)
This runs the pipeline on the 4 provided audio files, but it will pass them in batches of 2 to the model (which is on a GPU, where batching is more likely to help) without requiring any further code from you. The output should always match what you would have received without batching. It is only meant as a way to help you get more speed out of a pipeline.
Pipelines can also alleviate some of the complexities of batching because, for some pipelines, a single item (like a long audio file) needs to be chunked into multiple parts to be processed by a model. The pipeline performs this chunk batching for you.
Task specific parameters
All tasks provide task specific parameters which allow for additional flexibility and options to help you get your job done. For instance, the transformers.AutomaticSpeechRecognitionPipeline.call() method has a return_timestamps parameter which sounds promising for subtitling videos:
>>> transcriber = pipeline(model="openai/whisper-large-v2", return_timestamps=True)
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.', 'chunks': [{'timestamp': (0.0, 11.88), 'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its'}, {'timestamp': (11.88, 12.38), 'text': ' creed.'}]}
As you can see, the model inferred the text and also output the timestamps at which the various sentences were pronounced.
There are many parameters available for each task, so check out each task’s API reference to see what you can tinker with! For instance, the AutomaticSpeechRecognitionPipeline has a chunk_length_s parameter which is helpful for working on really long audio files (for example, subtitling entire movies or hour-long videos) that a model typically cannot handle on its own:
>>> transcriber = pipeline(model="openai/whisper-large-v2", chunk_length_s=30, return_timestamps=True)
>>> transcriber("https://huggingface.co/datasets/sanchit-gandhi/librispeech_long/resolve/main/audio.wav")
{'text': " Chapter 16. I might have told you of the beginning of this liaison in a few lines, but I wanted you to see every step by which we came. I, too, agree to whatever Marguerite wished, Marguerite to be unable to live apart from me. It was the day after the evening...
If you can’t find a parameter that would really help you out, feel free to request it!
Using pipelines on a dataset
The pipeline can also run inference on a large dataset. The easiest way we recommend doing this is by using an iterator:
def data():
for i in range(1000):
yield f"My example {i}"
pipe = pipeline(model="gpt2", device=0)
generated_characters = 0
for out in pipe(data()):
generated_characters += len(out[0]["generated_text"])
The iterator data() yields each result, and the pipeline automatically recognizes the input is iterable and will start fetching the data while it continues to process it on the GPU (this uses DataLoader under the hood). This is important because you don’t have to allocate memory for the whole dataset and you can feed the GPU as fast as possible.
Since batching could speed things up, it may be useful to try tuning the batch_size parameter here.
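For example, a minimal sketch of the same iterator pattern with batching enabled (the batch_size value is illustrative and should be tuned for your hardware and inputs):

pipe = pipeline(model="gpt2", device=0, batch_size=8)

generated_characters = 0
for out in pipe(data()):
    generated_characters += len(out[0]["generated_text"])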
The simplest way to iterate over a dataset is to just load one from 🤗 Datasets:
from transformers.pipelines.pt_utils import KeyDataset
from datasets import load_dataset
pipe = pipeline(model="hf-internal-testing/tiny-random-wav2vec2", device=0)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation[:10]")
for out in pipe(KeyDataset(dataset, "audio")):
print(out)
Using pipelines for a webserver
Creating an inference engine is a complex topic which deserves its own page: see the Using pipelines for a webserver guide.
Vision pipeline
Using a pipeline() for vision tasks is practically identical.
Specify your task and pass your image to the classifier. The image can be a link, a local path or a base64-encoded image. For example, what species of cat is shown below?
>>> from transformers import pipeline
>>> vision_classifier = pipeline(model="google/vit-base-patch16-224")
>>> preds = vision_classifier(
... images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.4335, 'label': 'lynx, catamount'}, {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0239, 'label': 'Egyptian cat'}, {'score': 0.0229, 'label': 'tiger cat'}]
Text pipeline
Using a pipeline() for NLP tasks is practically identical.
>>> from transformers import pipeline
>>> # facebook/bart-large-mnli is a zero-shot classification checkpoint, so you can supply your own candidate labels
>>> classifier = pipeline(model="facebook/bart-large-mnli")
>>> classifier(
... "I have a problem with my iphone that needs to be resolved asap!!",
... candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
... )
{'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['urgent', 'phone', 'computer', 'not urgent', 'tablet'], 'scores': [0.504, 0.479, 0.013, 0.003, 0.002]}
Multimodal pipeline
The pipeline() supports more than one modality. For example, a visual question answering (VQA) task combines text and image. Feel free to use any image link you like and a question you want to ask about the image. The image can be a URL or a local path to the image.
For example, if you use this invoice image:
>>> from transformers import pipeline
>>> vqa = pipeline(model="impira/layoutlm-document-qa")
>>> vqa(
... image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png",
... question="What is the invoice number?",
... )
[{'score': 0.42515, 'answer': 'us-001', 'start': 16, 'end': 16}]
To run the example above you need to have pytesseract installed in addition to 🤗 Transformers:
sudo apt install -y tesseract-ocr
pip install pytesseract
Using pipelines on large models with 🤗 accelerate
You can easily run pipelines on large models using 🤗 accelerate! First make sure you have installed accelerate with pip install accelerate.
First load your model using device_map="auto"! We will use facebook/opt-1.3b for our example.
import torch
from transformers import pipeline
pipe = pipeline(model="facebook/opt-1.3b", torch_dtype=torch.bfloat16, device_map="auto")
output = pipe("This is a cool example!", do_sample=True, top_p=0.95)
You can also pass 8-bit loaded models if you install bitsandbytes and add the argument load_in_8bit=True:
import torch
from transformers import pipeline
pipe = pipeline(model="facebook/opt-1.3b", device_map="auto", model_kwargs={"load_in_8bit": True})
output = pipe("This is a cool example!", do_sample=True, top_p=0.95)
Note that you can replace the checkpoint with any Hugging Face model that supports large model loading, such as BLOOM!
https://huggingface.co/docs/transformers/run_scripts | Train with a script
Along with the 🤗 Transformers notebooks, there are also example scripts demonstrating how to train a model for a task with PyTorch, TensorFlow, or JAX/Flax.
You will also find scripts we’ve used in our research projects and legacy examples which are mostly community contributed. These scripts are not actively maintained and require a specific version of 🤗 Transformers that will most likely be incompatible with the latest version of the library.
The example scripts are not expected to work out-of-the-box on every problem, and you may need to adapt the script to the problem you’re trying to solve. To help you with this, most of the scripts fully expose how data is preprocessed, allowing you to edit it as necessary for your use case.
For any feature you’d like to implement in an example script, please discuss it on the forum or in an issue before submitting a Pull Request. While we welcome bug fixes, it is unlikely we will merge a Pull Request that adds more functionality at the cost of readability.
This guide will show you how to run an example summarization training script in PyTorch and TensorFlow. All examples are expected to work with both frameworks unless otherwise specified.
Setup
To successfully run the latest version of the example scripts, you have to install 🤗 Transformers from source in a new virtual environment:
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
For older versions of the example scripts, click on the toggle below:
Examples for older versions of 🤗 Transformers
v4.5.1
v4.4.2
v4.3.3
v4.2.2
v4.1.1
v4.0.1
v3.5.1
v3.4.0
v3.3.1
v3.2.0
v3.1.0
v3.0.2
v2.11.0
v2.10.0
v2.9.1
v2.8.0
v2.7.0
v2.6.0
v2.5.1
v2.4.0
v2.3.0
v2.2.0
v2.1.1
v2.0.0
v1.2.0
v1.1.0
v1.0.0
Then switch your current clone of 🤗 Transformers to a specific version, like v3.5.1 for example:
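git checkout tags/v3.5.1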
After you’ve set up the correct library version, navigate to the example folder of your choice and install the example specific requirements:
pip install -r requirements.txt
Run a script
Pytorch
Hide Pytorch content
The example script downloads and preprocesses a dataset from the 🤗 Datasets library. Then the script fine-tunes a model on the dataset with the Trainer, using an architecture that supports summarization. The following example shows how to fine-tune T5-small on the CNN/DailyMail dataset. The T5 model requires an additional source_prefix argument due to how it was trained. This prompt lets T5 know this is a summarization task.
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
TensorFlow
Hide TensorFlow content
The example script downloads and preprocesses a dataset from the 🤗 Datasets library. Then the script fine-tunes a model on the dataset using Keras, with an architecture that supports summarization. The following example shows how to fine-tune T5-small on the CNN/DailyMail dataset. The T5 model requires an additional source_prefix argument due to how it was trained. This prompt lets T5 know this is a summarization task.
python examples/tensorflow/summarization/run_summarization.py \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 16 \
--num_train_epochs 3 \
--do_train \
--do_eval
Distributed training and mixed precision
The Trainer supports distributed training and mixed precision, which means you can also use it in a script. To enable both of these features:
Add the fp16 argument to enable mixed precision.
Set the number of GPUs to use with the nproc_per_node argument.
python -m torch.distributed.launch \
--nproc_per_node 8 pytorch/summarization/run_summarization.py \
--fp16 \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
TensorFlow scripts utilize a MirroredStrategy for distributed training, and you don’t need to add any additional arguments to the training script. The TensorFlow script will use multiple GPUs by default if they are available.
Run a script on a TPU
Pytorch
Hide Pytorch content
Tensor Processing Units (TPUs) are specifically designed to accelerate performance. PyTorch supports TPUs with the XLA deep learning compiler (see here for more details). To use a TPU, launch the xla_spawn.py script and use the num_cores argument to set the number of TPU cores you want to use.
python xla_spawn.py --num_cores 8 \
summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
TensorFlow
Hide TensorFlow content
Tensor Processing Units (TPUs) are specifically designed to accelerate performance. TensorFlow scripts utilize a TPUStrategy for training on TPUs. To use a TPU, pass the name of the TPU resource to the tpu argument.
python run_summarization.py \
--tpu name_of_tpu_resource \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 16 \
--num_train_epochs 3 \
--do_train \
--do_eval
Run a script with 🤗 Accelerate
🤗 Accelerate is a PyTorch-only library that offers a unified method for training a model on several types of setups (CPU-only, multiple GPUs, TPUs) while maintaining complete visibility into the PyTorch training loop. Make sure you have 🤗 Accelerate installed if you don’t already have it:
Note: As Accelerate is rapidly developing, the git version of accelerate must be installed to run the scripts
pip install git+https://github.com/huggingface/accelerate
Instead of the run_summarization.py script, you need to use the run_summarization_no_trainer.py script. 🤗 Accelerate supported scripts will have a task_no_trainer.py file in the folder. Begin by running the following command to create and save a configuration file:
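accelerate config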
Test your setup to make sure it is configured correctly:
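accelerate test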
Now you are ready to launch the training:
accelerate launch run_summarization_no_trainer.py \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir ~/tmp/tst-summarization
Use a custom dataset
The summarization script supports custom datasets as long as they are a CSV or JSON Line file. When you use your own dataset, you need to specify several additional arguments:
train_file and validation_file specify the path to your training and validation files.
text_column is the input text to summarize.
summary_column is the target text to output.
A summarization script using a custom dataset would look like this:
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--train_file path_to_csv_or_jsonlines_file \
--validation_file path_to_csv_or_jsonlines_file \
--text_column text_column_name \
--summary_column summary_column_name \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--overwrite_output_dir \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--predict_with_generate
Test a script
It is often a good idea to run your script on a smaller number of dataset examples to ensure everything works as expected before committing to an entire dataset which may take hours to complete. Use the following arguments to truncate the dataset to a maximum number of samples:
max_train_samples
max_eval_samples
max_predict_samples
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--max_train_samples 50 \
--max_eval_samples 50 \
--max_predict_samples 50 \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
Not all example scripts support the max_predict_samples argument. If you aren’t sure whether your script supports this argument, add the -h argument to check:
examples/pytorch/summarization/run_summarization.py -h
Resume training from checkpoint
Another helpful option to enable is resuming training from a previous checkpoint. This will ensure you can pick up where you left off without starting over if your training gets interrupted. There are two methods to resume training from a checkpoint.
The first method uses the output_dir previous_output_dir argument to resume training from the latest checkpoint stored in output_dir. In this case, you should remove overwrite_output_dir:
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--output_dir previous_output_dir \
--predict_with_generate
The second method uses the resume_from_checkpoint path_to_specific_checkpoint argument to resume training from a specific checkpoint folder.
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--resume_from_checkpoint path_to_specific_checkpoint \
--predict_with_generate
Share your model
All scripts can upload your final model to the Model Hub. Make sure you are logged into Hugging Face before you begin:
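huggingface-cli login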
Then add the push_to_hub argument to the script. This argument will create a repository with your Hugging Face username and the folder name specified in output_dir.
To give your repository a specific name, use the push_to_hub_model_id argument to add it. The repository will be automatically listed under your namespace.
The following example shows how to upload a model with a specific repository name:
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--push_to_hub \
--push_to_hub_model_id finetuned-t5-cnn_dailymail \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate |
https://huggingface.co/docs/transformers/peft | Load adapters with 🤗 PEFT
Parameter-Efficient Fine Tuning (PEFT) methods freeze the pretrained model parameters during fine-tuning and add a small number of trainable parameters (the adapters) on top of it. The adapters are trained to learn task-specific information. This approach has been shown to be very memory-efficient with lower compute usage while producing results comparable to a fully fine-tuned model.
Adapters trained with PEFT are also usually an order of magnitude smaller than the full model, making it convenient to share, store, and load them.
The adapter weights for an OPTForCausalLM model stored on the Hub are only ~6MB compared to the full size of the model weights, which can be ~700MB.
If you’re interested in learning more about the 🤗 PEFT library, check out the documentation.
Setup
Get started by installing 🤗 PEFT:
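pip install peft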
If you want to try out the brand new features, you might be interested in installing the library from source:
pip install git+https://github.com/huggingface/peft.git
Supported PEFT models
🤗 Transformers natively supports some PEFT methods, meaning you can load adapter weights stored locally or on the Hub and easily run or train them with a few lines of code. The following methods are supported:
Low Rank Adapters
IA3
AdaLoRA
If you want to use other PEFT methods, such as prompt learning or prompt tuning, or to learn more about the 🤗 PEFT library in general, please refer to the documentation.
Load a PEFT adapter
To load and use a PEFT adapter model from 🤗 Transformers, make sure the Hub repository or local directory contains an adapter_config.json file and the adapter weights. Then you can load the PEFT adapter model using the AutoModelFor class. For example, to load a PEFT adapter model for causal language modeling:
specify the PEFT model id
pass it to the AutoModelForCausalLM class
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "ybelkada/opt-350m-lora"
model = AutoModelForCausalLM.from_pretrained(peft_model_id)
You can load a PEFT adapter with either an AutoModelFor class or the base model class like OPTForCausalLM or LlamaForCausalLM.
You can also load a PEFT adapter by calling the load_adapter method:
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "facebook/opt-350m"
peft_model_id = "ybelkada/opt-350m-lora"
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
Load in 8bit or 4bit
The bitsandbytes integration supports 8bit and 4bit precision data types, which are useful for loading large models because it saves memory (see the bitsandbytes integration guide to learn more). Add the load_in_8bit or load_in_4bit parameters to from_pretrained() and set device_map="auto" to effectively distribute the model to your hardware:
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "ybelkada/opt-350m-lora"
model = AutoModelForCausalLM.from_pretrained(peft_model_id, device_map="auto", load_in_8bit=True)
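A minimal sketch for 4-bit loading looks the same, with the quantization flag swapped:

model = AutoModelForCausalLM.from_pretrained(peft_model_id, device_map="auto", load_in_4bit=True)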
Add a new adapter
You can use ~peft.PeftModel.add_adapter to add a new adapter to a model with an existing adapter as long as the new adapter is the same type as the current one. For example, if you have an existing LoRA adapter attached to a model:
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import LoraConfig
model_id = "facebook/opt-350m"
model = AutoModelForCausalLM.from_pretrained(model_id)
lora_config = LoraConfig(
target_modules=["q_proj", "k_proj"],
init_lora_weights=False
)
model.add_adapter(lora_config, adapter_name="adapter_1")
To add a new adapter:
model.add_adapter(lora_config, adapter_name="adapter_2")
Now you can use ~peft.PeftModel.set_adapter to set which adapter to use:
model.set_adapter("adapter_1")
output = model.generate(**inputs)
print(tokenizer.decode(output[0], skip_special_tokens=True))
model.set_adapter("adapter_2")
output_enabled = model.generate(**inputs)
print(tokenizer.decode(output_enabled[0], skip_special_tokens=True))
Enable and disable adapters
Once you’ve added an adapter to a model, you can enable or disable the adapter module. To enable the adapter module:
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import PeftConfig
model_id = "facebook/opt-350m"
adapter_model_id = "ybelkada/opt-350m-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
text = "Hello"
inputs = tokenizer(text, return_tensors="pt")
model = AutoModelForCausalLM.from_pretrained(model_id)
peft_config = PeftConfig.from_pretrained(adapter_model_id)
peft_config.init_lora_weights = False
model.add_adapter(peft_config)
model.enable_adapters()
output = model.generate(**inputs)
To disable the adapter module:
model.disable_adapters()
output = model.generate(**inputs)
Train a PEFT adapter
PEFT adapters are supported by the Trainer class so that you can train an adapter for your specific use case. It only requires adding a few more lines of code. For example, to train a LoRA adapter:
If you aren’t familiar with fine-tuning a model with Trainer, take a look at the Fine-tune a pretrained model tutorial.
Define your adapter configuration with the task type and hyperparameters (see ~peft.LoraConfig for more details about what the hyperparameters do).
from peft import LoraConfig
peft_config = LoraConfig(
lora_alpha=16,
lora_dropout=0.1,
r=64,
bias="none",
task_type="CAUSAL_LM",
)
Add the adapter to the model:
model.add_adapter(peft_config)
Now you can pass the model to Trainer!
trainer = Trainer(model=model, ...)
trainer.train()
To save your trained adapter and load it back:
model.save_pretrained(save_dir)
model = AutoModelForCausalLM.from_pretrained(save_dir) |
https://huggingface.co/Writer/palmyra-med-20b | Palmyra-med-20b
Model description
Palmyra-Med-20b is a 20 billion parameter Large Language Model that has been uptrained on Palmyra-Large with a specialized custom-curated medical dataset. The main objective of this model is to enhance performance in tasks related to medical dialogue and question-answering.
Developed by: https://writer.com/;
Model type: Causal decoder-only;
Language(s) (NLP): English;
License: Apache 2.0;
Finetuned from model: Palmyra-Large.
Model Source
Palmyra-Med: Instruction-Based Fine-Tuning of LLMs Enhancing Medical Domain Performance
Uses
Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
Bias, Risks, and Limitations
Palmyra-Med-20B is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
Recommendations
We recommend users of Palmyra-Med-20B to develop guardrails and to take appropriate precautions for any production use.
Usage
The model is compatible with the huggingface AutoModelForCausalLM and can be easily run on a single 40GB A100.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Writer/palmyra-med-20b"

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
)

prompt = "Can you explain in simple terms how vaccines help our body fight diseases?"

input_text = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: {prompt} "
    "ASSISTANT:"
)

# tokenize the prompt and move it to the GPU
model_inputs = tokenizer(input_text.format(prompt=prompt), return_tensors="pt").to("cuda")

gen_conf = {
    "temperature": 0.7,
    "repetition_penalty": 1.0,
    "max_new_tokens": 512,
    "do_sample": True,
}

# generate, then decode only the newly generated tokens
out_tokens = model.generate(**model_inputs, **gen_conf)
response_ids = out_tokens[0][len(model_inputs.input_ids[0]) :]
output = tokenizer.decode(response_ids, skip_special_tokens=True)

print(output)

## output ##
# Vaccines stimulate the production of antibodies by the body's immune system.
# Antibodies are proteins produced by B lymphocytes in response to foreign substances, such as viruses and bacteria.
# The antibodies produced by the immune system can bind to and neutralize the pathogens, preventing them from invading and damaging the host cells.
# Vaccines work by introducing antigens, which are components of the pathogen, into the body.
# The immune system then produces antibodies against the antigens, which can recognize and neutralize the pathogen if it enters the body in the future.
# The use of vaccines has led to a significant reduction in the incidence and severity of many diseases, including measles, mumps, rubella, and polio.
It can also be used with text-generation-inference
model=Writer/palmyra-med-20b
volume=$PWD/data

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference --model-id $model
Dataset
For the fine-tuning of our LLMs, we used a custom-curated medical dataset that combines data from two publicly available sources: PubMedQA (Jin et al. 2019) and MedQA (Zhang et al. 2018). The PubMedQA dataset, which originated from the PubMed abstract database, consists of biomedical articles accompanied by corresponding question-answer pairs. In contrast, the MedQA dataset features medical questions and answers that are designed to assess the reasoning capabilities of medical question-answering systems. We prepared our custom dataset by merging and processing data from the aforementioned sources, maintaining the dataset mixture ratios detailed in Table 1. These ratios were consistent for fine-tuning both the Palmyra-20b and Palmyra-40b models. Upon fine-tuning the models with this dataset, we refer to the resulting models as Palmyra-Med-20b and Palmyra-Med-40b, respectively.
Dataset Ratio Count
PubMedQA 75% 150,000
MedQA 25% 10,178
Evaluation
We present the findings of our experiments, beginning with the evaluation outcomes of the fine-tuned models and followed by a discussion of the base models’ performance on each of the evaluation datasets. Additionally, we report the progressive improvement of the Palmyra-Med-40b model throughout the training process on the PubMedQA dataset.
Model PubMedQA MedQA
Palmyra-20b 49.8 31.2
Palmyra-40b 64.8 43.1
Palmyra-Med-20b 75.6 44.6
Palmyra-Med-40b 81.1 72.4
Limitation
The model may not operate efficiently beyond the confines of the healthcare field. Since it has not been subjected to practical scenarios, its real-time efficacy and precision remain undetermined. Under no circumstances should it replace the advice of a medical professional, and it must be regarded solely as a tool for research purposes.
Citation and Related Information
To cite this model:
@misc{Palmyra-Med-20B,
  author = {Writer Engineering team},
  title = {{Palmyra-Large Parameter Autoregressive Language Model}},
  howpublished = {\url{https://dev.writer.com}},
  year = 2023,
  month = March
}
Contact
Hello@writer.com |
https://huggingface.co/docs/transformers/multilingual | Multilingual models for inference
There are several multilingual models in 🤗 Transformers, and their inference usage differs from monolingual models. Not all multilingual model usage is different though. Some models, like bert-base-multilingual-uncased, can be used just like a monolingual model. This guide will show you how to use multilingual models whose usage differs for inference.
XLM
XLM has ten different checkpoints, only one of which is monolingual. The nine remaining model checkpoints can be split into two categories: the checkpoints that use language embeddings and those that don’t.
XLM with language embeddings
The following XLM models use language embeddings to specify the language used at inference:
xlm-mlm-ende-1024 (Masked language modeling, English-German)
xlm-mlm-enfr-1024 (Masked language modeling, English-French)
xlm-mlm-enro-1024 (Masked language modeling, English-Romanian)
xlm-mlm-xnli15-1024 (Masked language modeling, XNLI languages)
xlm-mlm-tlm-xnli15-1024 (Masked language modeling + translation, XNLI languages)
xlm-clm-enfr-1024 (Causal language modeling, English-French)
xlm-clm-ende-1024 (Causal language modeling, English-German)
Language embeddings are represented as a tensor of the same shape as the input_ids passed to the model. The values in these tensors depend on the language used and are identified by the tokenizer’s lang2id and id2lang attributes.
In this example, load the xlm-clm-enfr-1024 checkpoint (Causal language modeling, English-French):
>>> import torch
>>> from transformers import XLMTokenizer, XLMWithLMHeadModel
>>> tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024")
>>> model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024")
The lang2id attribute of the tokenizer displays this model’s languages and their ids:
>>> print(tokenizer.lang2id)
{'en': 0, 'fr': 1}
Next, create an example input:
>>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])
Set the language id as "en" and use it to define the language embedding. The language embedding is a tensor filled with 0 since that is the language id for English. This tensor should be the same size as input_ids.
>>> language_id = tokenizer.lang2id["en"]
>>> langs = torch.tensor([language_id] * input_ids.shape[1])
>>> # reshape langs to the shape (batch_size, sequence_length) expected by the model
>>> langs = langs.view(1, -1)
Now you can pass the input_ids and language embedding to the model:
>>> outputs = model(input_ids, langs=langs)
The run_generation.py script can generate text with language embeddings using the xlm-clm checkpoints.
XLM without language embeddings
The following XLM models do not require language embeddings during inference:
xlm-mlm-17-1280 (Masked language modeling, 17 languages)
xlm-mlm-100-1280 (Masked language modeling, 100 languages)
These models are used for generic sentence representations, unlike the previous XLM checkpoints.
BERT
The following BERT models can be used for multilingual tasks:
bert-base-multilingual-uncased (Masked language modeling + Next sentence prediction, 102 languages)
bert-base-multilingual-cased (Masked language modeling + Next sentence prediction, 104 languages)
These models do not require language embeddings during inference. They should identify the language from the context and infer accordingly.
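For example, a minimal sketch of using one of these checkpoints exactly like a monolingual model (the fill-mask task and the French example sentence are only illustrative):

>>> from transformers import pipeline

>>> fill_mask = pipeline("fill-mask", model="bert-base-multilingual-cased")
>>> fill_mask("Paris est la [MASK] de la France.")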
XLM-RoBERTa
The following XLM-RoBERTa models can be used for multilingual tasks:
xlm-roberta-base (Masked language modeling, 100 languages)
xlm-roberta-large (Masked language modeling, 100 languages)
XLM-RoBERTa was trained on 2.5TB of newly created and cleaned CommonCrawl data in 100 languages. It provides strong gains over previously released multilingual models like mBERT or XLM on downstream tasks like classification, sequence labeling, and question answering.
M2M100
The following M2M100 models can be used for multilingual translation:
facebook/m2m100_418M (Translation)
facebook/m2m100_1.2B (Translation)
In this example, load the facebook/m2m100_418M checkpoint to translate from Chinese to English. You can set the source language in the tokenizer:
>>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
>>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger."
>>> chinese_text = "不要插手巫師的事務, 因為他們是微妙的, 很快就會發怒."
>>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="zh")
>>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
Tokenize the text:
>>> encoded_zh = tokenizer(chinese_text, return_tensors="pt")
M2M100 forces the target language id as the first generated token to translate to the target language. Set the forced_bos_token_id to en in the generate method to translate to English:
>>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
'Do not interfere with the matters of the witches, because they are delicate and will soon be angry.'
MBart
The following MBart models can be used for multilingual translation:
facebook/mbart-large-50-one-to-many-mmt (One-to-many multilingual machine translation, 50 languages)
facebook/mbart-large-50-many-to-many-mmt (Many-to-many multilingual machine translation, 50 languages)
facebook/mbart-large-50-many-to-one-mmt (Many-to-one multilingual machine translation, 50 languages)
facebook/mbart-large-50 (Multilingual translation, 50 languages)
facebook/mbart-large-cc25
In this example, load the facebook/mbart-large-50-many-to-many-mmt checkpoint to translate Finnish to English. You can set the source language in the tokenizer:
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger."
>>> fi_text = "Älä sekaannu velhojen asioihin, sillä ne ovat hienovaraisia ja nopeasti vihaisia."
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
Tokenize the text:
>>> encoded_fi = tokenizer(fi_text, return_tensors="pt")
MBart forces the target language id as the first generated token to translate to the target language. Set the forced_bos_token_id to en in the generate method to translate to English:
>>> generated_tokens = model.generate(**encoded_fi, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"Don't interfere with the wizard's affairs, because they are subtle, will soon get angry."
If you are using the facebook/mbart-large-50-many-to-one-mmt checkpoint, you don’t need to force the target language id as the first generated token; otherwise, the usage is the same.
https://huggingface.co/docs/transformers/model_sharing | Share a model
The last two tutorials showed how you can fine-tune a model with PyTorch, Keras, and 🤗 Accelerate for distributed setups. The next step is to share your model with the community! At Hugging Face, we believe in openly sharing knowledge and resources to democratize artificial intelligence for everyone. We encourage you to consider sharing your model with the community to help others save time and resources.
In this tutorial, you will learn two methods for sharing a trained or fine-tuned model on the Model Hub:
Programmatically push your files to the Hub.
Drag-and-drop your files to the Hub with the web interface.
To share a model with the community, you need an account on huggingface.co. You can also join an existing organization or create a new one.
Repository features
Each repository on the Model Hub behaves like a typical GitHub repository. Our repositories offer versioning, commit history, and the ability to visualize differences.
The Model Hub’s built-in versioning is based on git and git-lfs. In other words, you can treat one model as one repository, enabling greater access control and scalability. Version control allows revisions, a method for pinning a specific version of a model with a commit hash, tag or branch.
As a result, you can load a specific model version with the revision parameter:
>>> model = AutoModel.from_pretrained(
... "julien-c/EsperBERTo-small", revision="v2.0.1"
... )
Files are also easily edited in a repository, and you can view the commit history as well as the difference:
Setup
Before sharing a model to the Hub, you will need your Hugging Face credentials. If you have access to a terminal, run the following command in the virtual environment where 🤗 Transformers is installed. This will store your access token in your Hugging Face cache folder (~/.cache/ by default):
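huggingface-cli login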
If you are using a notebook like Jupyter or Colaboratory, make sure you have the huggingface_hub library installed. This library allows you to programmatically interact with the Hub.
pip install huggingface_hub
Then use notebook_login to sign-in to the Hub, and follow the link here to generate a token to login with:
>>> from huggingface_hub import notebook_login
>>> notebook_login()
Convert a model for all frameworks
To ensure your model can be used by someone working with a different framework, we recommend you convert and upload your model with both PyTorch and TensorFlow checkpoints. While users are still able to load your model from a different framework if you skip this step, it will be slower because 🤗 Transformers will need to convert the checkpoint on-the-fly.
Converting a checkpoint for another framework is easy. Make sure you have PyTorch and TensorFlow installed (see here for installation instructions), and then find the specific model for your task in the other framework.
Pytorch
Hide Pytorch content
Specify from_tf=True to convert a checkpoint from TensorFlow to PyTorch:
>>> pt_model = DistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_tf=True)
>>> pt_model.save_pretrained("path/to/awesome-name-you-picked")
TensorFlow
Hide TensorFlow content
Specify from_pt=True to convert a checkpoint from PyTorch to TensorFlow:
>>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_pt=True)
Then you can save your new TensorFlow model with its new checkpoint:
>>> tf_model.save_pretrained("path/to/awesome-name-you-picked")
If a model is available in Flax, you can also convert a checkpoint from PyTorch to Flax:
>>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained(
... "path/to/awesome-name-you-picked", from_pt=True
... )
Push a model during training
Pytorch
Hide Pytorch content
Sharing a model to the Hub is as simple as adding an extra parameter or callback. Remember from the fine-tuning tutorial, the TrainingArguments class is where you specify hyperparameters and additional training options. One of these training options includes the ability to push a model directly to the Hub. Set push_to_hub=True in your TrainingArguments:
>>> training_args = TrainingArguments(output_dir="my-awesome-model", push_to_hub=True)
Pass your training arguments as usual to Trainer:
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=small_train_dataset,
... eval_dataset=small_eval_dataset,
... compute_metrics=compute_metrics,
... )
After you fine-tune your model, call push_to_hub() on Trainer to push the trained model to the Hub. 🤗 Transformers will even automatically add training hyperparameters, training results and framework versions to your model card!
>>> trainer.push_to_hub()
TensorFlow
Hide TensorFlow content
Share a model to the Hub with PushToHubCallback. In the PushToHubCallback function, add:
An output directory for your model.
A tokenizer.
The hub_model_id, which is your Hub username and model name.
>>> from transformers import PushToHubCallback
>>> push_to_hub_callback = PushToHubCallback(
... output_dir="./your_model_save_path", tokenizer=tokenizer, hub_model_id="your-username/my-awesome-model"
... )
Add the callback to fit, and 🤗 Transformers will push the trained model to the Hub:
>>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback)
Use the push_to_hub function
You can also call push_to_hub directly on your model to upload it to the Hub.
Specify your model name in push_to_hub:
>>> pt_model.push_to_hub("my-awesome-model")
This creates a repository under your username with the model name my-awesome-model. Users can now load your model with the from_pretrained function:
>>> from transformers import AutoModel
>>> model = AutoModel.from_pretrained("your_username/my-awesome-model")
If you belong to an organization and want to push your model under the organization name instead, just add it to the repo_id:
>>> pt_model.push_to_hub("my-awesome-org/my-awesome-model")
The push_to_hub function can also be used to add other files to a model repository. For example, add a tokenizer to a model repository:
>>> tokenizer.push_to_hub("my-awesome-model")
Or perhaps you’d like to add the TensorFlow version of your fine-tuned PyTorch model:
>>> tf_model.push_to_hub("my-awesome-model")
Now when you navigate to your Hugging Face profile, you should see your newly created model repository. Clicking on the Files tab will display all the files you’ve uploaded to the repository.
For more details on how to create and upload files to a repository, refer to the Hub documentation here.
Upload with the web interface
Users who prefer a no-code approach are able to upload a model through the Hub’s web interface. Visit huggingface.co/new to create a new repository:
From here, add some information about your model:
Select the owner of the repository. This can be yourself or any of the organizations you belong to.
Pick a name for your model, which will also be the repository name.
Choose whether your model is public or private.
Specify the license usage for your model.
Now click on the Files tab and click on the Add file button to upload a new file to your repository. Then drag-and-drop a file to upload and add a commit message.
Add a model card
To make sure users understand your model’s capabilities, limitations, potential biases and ethical considerations, please add a model card to your repository. The model card is defined in the README.md file. You can add a model card by:
Manually creating and uploading a README.md file.
Clicking on the Edit model card button in your model repository.
Take a look at the DistilBert model card for a good example of the type of information a model card should include. For more details about other options you can control in the README.md file such as a model’s carbon footprint or widget examples, refer to the documentation here. |
https://huggingface.co/docs/transformers/transformers_agents | Transformers Agents
Transformers Agents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change.
Transformers Agents was introduced in Transformers version v4.29.0, building on the concept of tools and agents. You can play with it in this colab.
In short, it provides a natural language API on top of transformers: we define a set of curated tools and design an agent to interpret natural language and to use these tools. It is extensible by design; we curated some relevant tools, but we’ll show you how the system can be extended easily to use any tool developed by the community.
Let’s start with a few examples of what can be achieved with this new API. It is particularly powerful when it comes to multimodal tasks, so let’s take it for a spin to generate images and read text out loud.
agent.run("Caption the following image", image=image)
Input Output
A beaver is swimming in the water
agent.run("Read the following text out loud", text=text)
Input Output
A beaver is swimming in the water (audio of the text read aloud)
agent.run(
"In the following `document`, where will the TRRF Scientific Advisory Council Meeting take place?",
document=document,
)
Input Output
ballroom foyer
Quickstart
Before being able to use agent.run, you will need to instantiate an agent, which is a large language model (LLM). We provide support for OpenAI models as well as open-source alternatives from BigCode and OpenAssistant. The OpenAI models perform better (but require you to have an OpenAI API key, so cannot be used for free); Hugging Face is providing free access to endpoints for BigCode and OpenAssistant models.
To start with, please install the agents extras in order to install all default dependencies.
pip install transformers[agents]
To use OpenAI models, you instantiate an OpenAiAgent after installing the openai dependency:
from transformers import OpenAiAgent
agent = OpenAiAgent(model="text-davinci-003", api_key="<your_api_key>")
To use BigCode or OpenAssistant, start by logging in to have access to the Inference API:
from huggingface_hub import login
login("<YOUR_TOKEN>")
Then, instantiate the agent
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
This is using the inference API that Hugging Face provides for free at the moment. If you have your own inference endpoint for this model (or another one) you can replace the URL above with your URL endpoint.
StarCoder and OpenAssistant are free to use and perform admirably well on simple tasks. However, the checkpoints don’t hold up when handling more complex prompts. If you’re facing such an issue, we recommend trying out the OpenAI model which, while sadly not open-source, performs better at this given time.
You’re now good to go! Let’s dive into the two APIs that you now have at your disposal.
Single execution (run)
Single execution is when you use the run() method of the agent:
agent.run("Draw me a picture of rivers and lakes.")
It automatically selects the tool (or tools) appropriate for the task you want to perform and runs them appropriately. It can perform one or several tasks in the same instruction (though the more complex your instruction, the more likely the agent is to fail).
agent.run("Draw me a picture of the sea then transform the picture to add an island")
Every run() operation is independent, so you can run it several times in a row with different tasks.
Note that your agent is just a large-language model, so small variations in your prompt might yield completely different results. It’s important to explain as clearly as possible the task you want to perform. We go more in-depth on how to write good prompts here.
If you’d like to keep a state across executions or to pass non-text objects to the agent, you can do so by specifying variables that you would like the agent to use. For example, you could generate the first image of rivers and lakes, and ask the model to update that picture to add an island by doing the following:
picture = agent.run("Generate a picture of rivers and lakes.")
updated_picture = agent.run("Transform the image in `picture` to add an island to it.", picture=picture)
This can be helpful when the model is unable to understand your request and mixes tools. An example would be:
agent.run("Draw me the picture of a capybara swimming in the sea")
Here, the model could interpret in two ways:
Have the text-to-image generate a capybara swimming in the sea
Or, have the text-to-image generate capybara, then use the image-transformation tool to have it swim in the sea
In case you would like to force the first scenario, you could do so by passing it the prompt as an argument:
agent.run("Draw me a picture of the `prompt`", prompt="a capybara swimming in the sea")
Chat-based execution (chat)
The agent also has a chat-based approach, using the chat() method:
agent.chat("Generate a picture of rivers and lakes")
agent.chat("Transform the picture so that there is a rock in there")
This approach is interesting when you want to keep state across instructions. It’s better suited to experimentation, and tends to work best with single, simple instructions rather than complex ones (which the run() method handles better).
This method can also take arguments if you would like to pass non-text types or specific prompts.
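For example, here is a minimal sketch that passes an image generated earlier into the chat (the variable name and prompts are illustrative):
picture = agent.run("Generate a picture of rivers and lakes.")
agent.chat("Transform the image in `picture` so that there is a boat on the water", picture=picture)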
⚠️ Remote execution
For demonstration purposes, and so that it could be used with all setups, we created remote executors for several of the default tools the agent has access to for the release. These are created using inference endpoints.
We have turned these off for now, but in order to see how to set up remote executors tools yourself, we recommend reading the custom tool guide.
What's happening here? What are tools, and what are agents?
Agents
The “agent” here is a large language model, and we’re prompting it so that it has access to a specific set of tools.
LLMs are pretty good at generating small samples of code, so this API takes advantage of that by prompting the LLM to give a small sample of code performing a task with a set of tools. This prompt is then completed by the task you give your agent and by the description of the tools you give it. This way, it gets access to the docs of the tools you are using, especially their expected inputs and outputs, and can generate the relevant code.
Tools
Tools are very simple: they’re a single function, with a name, and a description. We then use these tools’ descriptions to prompt the agent. Through the prompt, we show the agent how it would leverage tools to perform what was requested in the query.
This is using brand-new tools and not pipelines, because the agent writes better code with very atomic tools. Pipelines are more refactored and often combine several tasks in one. Tools are meant to be focused on one very simple task only.
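To make this concrete, here is a minimal sketch of what a tool could look like (the tool name, description, and body are hypothetical, and the exact attributes expected by the Tool class may vary between versions):
from transformers import Tool


class TextReverserTool(Tool):
    # The name and description are what the agent sees in its prompt,
    # so they should describe one very simple, atomic task.
    name = "text_reverser"
    description = "This tool reverses the text given to it and returns the reversed text."

    def __call__(self, text: str) -> str:
        return text[::-1]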
Code-execution?!
This code is then executed with our small Python interpreter on the set of inputs passed along with your tools. We hear you screaming “Arbitrary code execution!” in the back, but let us explain why that is not the case.
The only functions that can be called are the tools you provided and the print function, so you’re already limited in what can be executed. You should be safe if it’s limited to Hugging Face tools.
Then, we don’t allow any attribute lookup or imports (which shouldn’t be needed anyway for passing along inputs/outputs to a small set of functions) so all the most obvious attacks (and you’d need to prompt the LLM to output them anyway) shouldn’t be an issue. If you want to be on the super safe side, you can execute the run() method with the additional argument return_code=True, in which case the agent will just return the code to execute and you can decide whether to do it or not.
The execution will stop at any line trying to perform an illegal operation or if there is a regular Python error with the code generated by the agent.
A curated set of tools
We identified a set of tools that can empower such agents. Here is an updated list of the tools we have integrated in transformers:
Document question answering: given a document (such as a PDF) in image format, answer a question on this document (Donut)
Text question answering: given a long text and a question, answer the question in the text (Flan-T5)
Unconditional image captioning: Caption the image! (BLIP)
Image question answering: given an image, answer a question on this image (VILT)
Image segmentation: given an image and a prompt, output the segmentation mask of that prompt (CLIPSeg)
Speech to text: given an audio recording of a person talking, transcribe the speech into text (Whisper)
Text to speech: convert text to speech (SpeechT5)
Zero-shot text classification: given a text and a list of labels, identify to which label the text corresponds the most (BART)
Text summarization: summarize a long text in one or a few sentences (BART)
Translation: translate the text into a given language (NLLB)
These tools have an integration in transformers, and can be used manually as well, for example:
from transformers import load_tool
tool = load_tool("text-to-speech")
audio = tool("This is a text to speech tool")
Custom tools
While we identify a curated set of tools, we strongly believe that the main value provided by this implementation is the ability to quickly create and share custom tools.
By pushing the code of a tool to a Hugging Face Space or a model repository, you’re then able to leverage the tool directly with the agent. We’ve added a few transformers-agnostic tools to the huggingface-tools organization:
Text downloader: to download a text from a web URL
Text to image: generate an image according to a prompt, leveraging stable diffusion
Image transformation: modify an image given an initial image and a prompt, leveraging instruct pix2pix stable diffusion
Text to video: generate a small video according to a prompt, leveraging damo-vilab
The text-to-image tool we have been using since the beginning is a remote tool that lives in huggingface-tools/text-to-image! We will continue releasing such tools on this and other organizations, to further supercharge this implementation.
The agents have access by default to tools that reside on huggingface-tools. We explain how you can write and share your tools, as well as leverage any custom tool that resides on the Hub, in the following guide.
Code generation
So far we have shown how to use the agents to perform actions for you. However, the agent is only generating code that we then execute using a very restricted Python interpreter. If you would like to use the generated code in a different setting, the agent can be prompted to return the code, along with the tool definitions and the required imports.
For example, the following instruction
agent.run("Draw me a picture of rivers and lakes", return_code=True)
returns the following code
from transformers import load_tool
image_generator = load_tool("huggingface-tools/text-to-image")
image = image_generator(prompt="rivers and lakes")
that you can then modify and execute yourself. |
https://huggingface.co/docs/transformers/llm_tutorial | Generation with LLMs
LLMs, or Large Language Models, are the key component behind text generation. In a nutshell, they consist of large pretrained transformer models trained to predict the next word (or, more precisely, token) given some input text. Since they predict one token at a time, you need to do something more elaborate to generate new sentences other than just calling the model — you need to do autoregressive generation.
Autoregressive generation is the inference-time procedure of iteratively calling a model with its own generated outputs, given a few initial inputs. In 🤗 Transformers, this is handled by the generate() method, which is available to all models with generative capabilities.
This tutorial will show you how to:
Generate text with an LLM
Avoid common pitfalls
Next steps to help you get the most out of your LLM
Before you begin, make sure you have all the necessary libraries installed:
pip install transformers bitsandbytes>=0.39.0 -q
Generate text
A language model trained for causal language modeling takes a sequence of text tokens as input and returns the probability distribution for the next token.
"Forward pass of an LLM"
A critical aspect of autoregressive generation with LLMs is how to select the next token from this probability distribution. Anything goes in this step as long as you end up with a token for the next iteration. This means it can be as simple as selecting the most likely token from the probability distribution or as complex as applying a dozen transformations before sampling from the resulting distribution.
"Autoregressive generation iteratively selects the next token from a probability distribution to generate text"
The process depicted above is repeated iteratively until some stopping condition is reached. Ideally, the stopping condition is dictated by the model, which should learn when to output an end-of-sequence (EOS) token. If this is not the case, generation stops when some predefined maximum length is reached.
Properly setting up the token selection step and the stopping condition is essential to make your model behave as you’d expect on your task. That is why we have a GenerationConfig file associated with each model, which contains a good default generative parameterization and is loaded alongside your model.
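As a quick sketch of what this looks like in practice, you can inspect those defaults without loading the model weights — assuming the checkpoint ships a generation_config.json file, which most recent checkpoints do (the checkpoint below is the one used in the next section):
>>> from transformers import GenerationConfig

>>> generation_config = GenerationConfig.from_pretrained("openlm-research/open_llama_7b")
>>> generation_config.max_length  # defaults to 20 unless the checkpoint overrides it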
Let’s talk code!
If you’re interested in basic LLM usage, our high-level Pipeline interface is a great starting point. However, LLMs often require advanced features like quantization and fine control of the token selection step, which is best done through generate(). Autoregressive generation with LLMs is also resource-intensive and should be executed on a GPU for adequate throughput.
First, you need to load the model.
>>> from transformers import AutoModelForCausalLM
>>> model = AutoModelForCausalLM.from_pretrained(
... "openlm-research/open_llama_7b", device_map="auto", load_in_4bit=True
... )
You’ll notice two flags in the from_pretrained call:
device_map ensures the model is moved to your GPU(s)
load_in_4bit applies 4-bit dynamic quantization to massively reduce the resource requirements
There are other ways to initialize a model, but this is a good baseline to begin with an LLM.
Next, you need to preprocess your text input with a tokenizer.
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b")
>>> model_inputs = tokenizer(["A list of colors: red, blue"], return_tensors="pt").to("cuda")
The model_inputs variable holds the tokenized text input, as well as the attention mask. While generate() does its best effort to infer the attention mask when it is not passed, we recommend passing it whenever possible for optimal results.
Finally, call the generate() method, which returns the generated tokens; these should be converted back to text before printing.
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A list of colors: red, blue, green, yellow, black, white, and brown'
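If you prefer to be explicit about the attention mask rather than unpacking the whole dictionary, the call below is equivalent (a small sketch of the same generation step):
>>> generated_ids = model.generate(
...     input_ids=model_inputs["input_ids"], attention_mask=model_inputs["attention_mask"]
... )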
And that’s it! In a few lines of code, you can harness the power of an LLM.
Common pitfalls
There are many generation strategies, and sometimes the default values may not be appropriate for your use case. If your outputs aren’t aligned with what you’re expecting, we’ve created a list of the most common pitfalls and how to avoid them.
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b")
>>> tokenizer.pad_token = tokenizer.eos_token
>>> model = AutoModelForCausalLM.from_pretrained(
... "openlm-research/open_llama_7b", device_map="auto", load_in_4bit=True
... )
Generated output is too short/long
If not specified in the GenerationConfig file, generate returns up to 20 tokens by default. We highly recommend manually setting max_new_tokens in your generate call to control the maximum number of new tokens it can return. Keep in mind LLMs (more precisely, decoder-only models) also return the input prompt as part of the output.
>>> model_inputs = tokenizer(["A sequence of numbers: 1, 2"], return_tensors="pt").to("cuda")
>>> # By default, the output will contain up to 20 tokens
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A sequence of numbers: 1, 2, 3, 4, 5'
>>> # Setting `max_new_tokens` allows you to control the maximum length
>>> generated_ids = model.generate(**model_inputs, max_new_tokens=50)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A sequence of numbers: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,'
Incorrect generation mode
By default, and unless specified in the GenerationConfig file, generate selects the most likely token at each iteration (greedy decoding). Depending on your task, this may be undesirable; creative tasks like chatbots or writing an essay benefit from sampling. On the other hand, input-grounded tasks like audio transcription or translation benefit from greedy decoding. Enable sampling with do_sample=True, and you can learn more about this topic in this blog post.
>>> # Set seed for reproducibility -- you don't need this unless you want full reproducibility
>>> from transformers import set_seed
>>> set_seed(0)
>>> model_inputs = tokenizer(["I am a cat."], return_tensors="pt").to("cuda")
>>> # LLM + greedy decoding = repetitive, boring output
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'I am a cat. I am a cat. I am a cat. I am a cat'
>>> # With sampling, the output becomes more creative!
>>> generated_ids = model.generate(**model_inputs, do_sample=True)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'I am a cat.\nI just need to be. I am always.\nEvery time'
Wrong padding side
LLMs are decoder-only architectures, meaning they continue to iterate on your input prompt. If your inputs do not have the same length, they need to be padded. Since LLMs are not trained to continue from pad tokens, your input needs to be left-padded. Make sure you also don’t forget to pass the attention mask to generate!
>>> # The tokenizer initialized above uses right-padding by default, so the shorter first sequence
>>> # is padded on the right and the model cannot correctly continue from it
>>> model_inputs = tokenizer(
... ["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt"
... ).to("cuda")
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids[0], skip_special_tokens=True)[0]
''
>>> # With left-padding, it works as expected!
>>> tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b", padding_side="left")
>>> tokenizer.pad_token = tokenizer.eos_token
>>> model_inputs = tokenizer(
... ["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt"
... ).to("cuda")
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'1, 2, 3, 4, 5, 6,'
Further resources
While the autoregressive generation process is relatively straightforward, making the most out of your LLM can be a challenging endeavor because there are many moving parts. Here are some resources to help you dive deeper into LLM usage and understanding:
Advanced generate usage
Guide on how to control different generation methods, how to set up the generation configuration file, and how to stream the output;
API reference on GenerationConfig, generate(), and generate-related classes.
LLM leaderboards
Open LLM Leaderboard, which focuses on the quality of the open-source models;
Open LLM-Perf Leaderboard, which focuses on LLM throughput.
Latency and throughput
Guide on dynamic quantization, which shows you how to drastically reduce your memory requirements.
Related libraries
text-generation-inference, a production-ready server for LLMs;
optimum, an extension of 🤗 Transformers that optimizes for specific hardware devices. |
https://huggingface.co/docs/transformers/custom_models | Sharing custom models
The 🤗 Transformers library is designed to be easily extensible. Every model is fully coded in a given subfolder of the repository with no abstraction, so you can easily copy a modeling file and tweak it to your needs.
If you are writing a brand new model, it might be easier to start from scratch. In this tutorial, we will show you how to write a custom model and its configuration so it can be used inside Transformers, and how you can share it with the community (with the code it relies on) so that anyone can use it, even if it’s not present in the 🤗 Transformers library.
We will illustrate all of this on a ResNet model, by wrapping the ResNet class of the timm library into a PreTrainedModel.
Writing a custom configuration
Before we dive into the model, let’s first write its configuration. The configuration of a model is an object that will contain all the necessary information to build the model. As we will see in the next section, the model can only take a config to be initialized, so we really need that object to be as complete as possible.
In our example, we will take a couple of arguments of the ResNet class that we might want to tweak. Different configurations will then give us the different types of ResNets that are possible. We then just store those arguments, after checking the validity of a few of them.
from transformers import PretrainedConfig
from typing import List
class ResnetConfig(PretrainedConfig):
    model_type = "resnet"

    def __init__(
        self,
        block_type="bottleneck",
        layers: List[int] = [3, 4, 6, 3],
        num_classes: int = 1000,
        input_channels: int = 3,
        cardinality: int = 1,
        base_width: int = 64,
        stem_width: int = 64,
        stem_type: str = "",
        avg_down: bool = False,
        **kwargs,
    ):
        if block_type not in ["basic", "bottleneck"]:
            raise ValueError(f"`block_type` must be 'basic' or 'bottleneck', got {block_type}.")
        if stem_type not in ["", "deep", "deep-tiered"]:
            raise ValueError(f"`stem_type` must be '', 'deep' or 'deep-tiered', got {stem_type}.")

        self.block_type = block_type
        self.layers = layers
        self.num_classes = num_classes
        self.input_channels = input_channels
        self.cardinality = cardinality
        self.base_width = base_width
        self.stem_width = stem_width
        self.stem_type = stem_type
        self.avg_down = avg_down
        super().__init__(**kwargs)
The three important things to remember when writing your own configuration are the following:
you have to inherit from PretrainedConfig,
the __init__ of your PretrainedConfig must accept any kwargs,
those kwargs need to be passed to the superclass __init__.
The inheritance is to make sure you get all the functionality from the 🤗 Transformers library, while the two other constraints come from the fact a PretrainedConfig has more fields than the ones you are setting. When reloading a config with the from_pretrained method, those fields need to be accepted by your config and then sent to the superclass.
Defining a model_type for your configuration (here model_type="resnet") is not mandatory, unless you want to register your model with the auto classes (see last section).
With this done, you can easily create and save your configuration like you would do with any other model config of the library. Here is how we can create a resnet50d config and save it:
resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True)
resnet50d_config.save_pretrained("custom-resnet")
This will save a file named config.json inside the folder custom-resnet. You can then reload your config with the from_pretrained method:
resnet50d_config = ResnetConfig.from_pretrained("custom-resnet")
You can also use any other method of the PretrainedConfig class, like push_to_hub() to directly upload your config to the Hub.
Writing a custom model
Now that we have our ResNet configuration, we can go on writing the model. We will actually write two: one that extracts the hidden features from a batch of images (like BertModel) and one that is suitable for image classification (like BertForSequenceClassification).
As we mentioned before, we’ll only write a loose wrapper of the model to keep it simple for this example. The only thing we need to do before writing this class is a map between the block types and actual block classes. Then the model is defined from the configuration by passing everything to the ResNet class:
from transformers import PreTrainedModel
from timm.models.resnet import BasicBlock, Bottleneck, ResNet
from .configuration_resnet import ResnetConfig
BLOCK_MAPPING = {"basic": BasicBlock, "bottleneck": Bottleneck}
class ResnetModel(PreTrainedModel):
    config_class = ResnetConfig

    def __init__(self, config):
        super().__init__(config)
        block_layer = BLOCK_MAPPING[config.block_type]
        self.model = ResNet(
            block_layer,
            config.layers,
            num_classes=config.num_classes,
            in_chans=config.input_channels,
            cardinality=config.cardinality,
            base_width=config.base_width,
            stem_width=config.stem_width,
            stem_type=config.stem_type,
            avg_down=config.avg_down,
        )

    def forward(self, tensor):
        return self.model.forward_features(tensor)
For the model that will classify images, we just change the forward method:
import torch


class ResnetModelForImageClassification(PreTrainedModel):
    config_class = ResnetConfig

    def __init__(self, config):
        super().__init__(config)
        block_layer = BLOCK_MAPPING[config.block_type]
        self.model = ResNet(
            block_layer,
            config.layers,
            num_classes=config.num_classes,
            in_chans=config.input_channels,
            cardinality=config.cardinality,
            base_width=config.base_width,
            stem_width=config.stem_width,
            stem_type=config.stem_type,
            avg_down=config.avg_down,
        )

    def forward(self, tensor, labels=None):
        logits = self.model(tensor)
        if labels is not None:
            # Use the functional cross entropy loss so the returned dict works with the Trainer
            loss = torch.nn.functional.cross_entropy(logits, labels)
            return {"loss": loss, "logits": logits}
        return {"logits": logits}
In both cases, notice how we inherit from PreTrainedModel and call the superclass initialization with the config (a bit like when you write a regular torch.nn.Module). The line that sets the config_class is not mandatory, unless you want to register your model with the auto classes (see last section).
If your model is very similar to a model inside the library, you can re-use the same configuration as this model.
You can have your model return anything you want, but returning a dictionary like we did for ResnetModelForImageClassification, with the loss included when labels are passed, will make your model directly usable inside the Trainer class. Using another output format is fine as long as you are planning on using your own training loop or another library for training.
Now that we have our model class, let’s create one:
resnet50d = ResnetModelForImageClassification(resnet50d_config)
Again, you can use any of the methods of PreTrainedModel, like save_pretrained() or push_to_hub(). We will use the second in the next section, and see how to push the model weights with the code of our model. But first, let’s load some pretrained weights inside our model.
In your own use case, you will probably be training your custom model on your own data. To go fast for this tutorial, we will use the pretrained version of the resnet50d. Since our model is just a wrapper around it, it’s going to be easy to transfer those weights:
import timm
pretrained_model = timm.create_model("resnet50d", pretrained=True)
resnet50d.model.load_state_dict(pretrained_model.state_dict())
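As a quick sanity check (a small sketch with random data, not part of the original weights transfer), you can run a forward pass and confirm the output dictionary has the shape you expect:
import torch

# One random 3-channel 224x224 image and a dummy label
inputs = torch.randn(1, 3, 224, 224)
labels = torch.tensor([1])

outputs = resnet50d(inputs, labels=labels)
print(outputs["logits"].shape)  # torch.Size([1, 1000]) with the default num_classes
print(outputs["loss"])  # a scalar loss tensor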
Now let’s see how to make sure that when we do save_pretrained() or push_to_hub(), the code of the model is saved.
Sending the code to the Hub
This API is experimental and may have some slight breaking changes in the next releases.
First, make sure your model is fully defined in a .py file. It can rely on relative imports to some other files as long as all the files are in the same directory (we don’t support submodules for this feature yet). For our example, we’ll define a modeling_resnet.py file and a configuration_resnet.py file in a folder of the current working directory named resnet_model. The configuration file contains the code for ResnetConfig and the modeling file contains the code of ResnetModel and ResnetModelForImageClassification.
.
└── resnet_model
├── __init__.py
├── configuration_resnet.py
└── modeling_resnet.py
The __init__.py can be empty; it’s just there so that Python detects that resnet_model can be used as a module.
If you copy modeling files from the library, you will need to replace all the relative imports at the top of the file with imports from the transformers package.
Note that you can re-use (or subclass) an existing configuration/model.
To share your model with the community, follow these steps. First, import the ResNet model and config from the newly created files:
from resnet_model.configuration_resnet import ResnetConfig
from resnet_model.modeling_resnet import ResnetModel, ResnetModelForImageClassification
Then you have to tell the library that you want to copy the code files of those objects when using the save_pretrained method, and register them properly with a given Auto class (especially for models). Just run:
ResnetConfig.register_for_auto_class()
ResnetModel.register_for_auto_class("AutoModel")
ResnetModelForImageClassification.register_for_auto_class("AutoModelForImageClassification")
Note that there is no need to specify an auto class for the configuration (there is only one auto class for them, AutoConfig) but it’s different for models. Your custom model could be suitable for many different tasks, so you have to specify which one of the auto classes is the correct one for your model.
Next, let’s create the config and models as we did before:
resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True)
resnet50d = ResnetModelForImageClassification(resnet50d_config)
pretrained_model = timm.create_model("resnet50d", pretrained=True)
resnet50d.model.load_state_dict(pretrained_model.state_dict())
Now to send the model to the Hub, make sure you are logged in. Either run in your terminal:
huggingface-cli login
or from a notebook:
from huggingface_hub import notebook_login
notebook_login()
You can then push to your own namespace (or an organization you are a member of) like this:
resnet50d.push_to_hub("custom-resnet50d")
On top of the modeling weights and the configuration in json format, this also copied the modeling and configuration .py files in the folder custom-resnet50d and uploaded the result to the Hub. You can check the result in this model repo.
See the sharing tutorial for more information on the push to Hub method.
Using a model with custom code
You can use any configuration, model or tokenizer with custom code files in its repository with the auto-classes and the from_pretrained method. All files and code uploaded to the Hub are scanned for malware (refer to the Hub security documentation for more information), but you should still review the model code and author to avoid executing malicious code on your machine. Set trust_remote_code=True to use a model with custom code:
from transformers import AutoModelForImageClassification
model = AutoModelForImageClassification.from_pretrained("sgugger/custom-resnet50d", trust_remote_code=True)
It is also strongly encouraged to pass a commit hash as a revision to make sure the author of the models did not update the code with some malicious new lines (unless you fully trust the authors of the models).
commit_hash = "ed94a7c6247d8aedce4647f00f20de6875b5b292"
model = AutoModelForImageClassification.from_pretrained(
"sgugger/custom-resnet50d", trust_remote_code=True, revision=commit_hash
)
Note that when browsing the commit history of the model repo on the Hub, there is a button to easily copy the commit hash of any commit.
Registering a model with custom code to the auto classes
If you are writing a library that extends 🤗 Transformers, you may want to extend the auto classes to include your own model. This is different from pushing the code to the Hub in the sense that users will need to import your library to get the custom models (as opposed to automatically downloading the model code from the Hub).
As long as your config has a model_type attribute that is different from existing model types, and that your model classes have the right config_class attributes, you can just add them to the auto classes like this:
from transformers import AutoConfig, AutoModel, AutoModelForImageClassification
AutoConfig.register("resnet", ResnetConfig)
AutoModel.register(ResnetConfig, ResnetModel)
AutoModelForImageClassification.register(ResnetConfig, ResnetModelForImageClassification)
Note that the first argument used when registering your custom config to AutoConfig needs to match the model_type of your custom config, and the first argument used when registering your custom models to any auto model class needs to match the config_class of those models. |
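Once registered, the custom classes resolve through the auto API like any built-in model. For example, a minimal sketch reusing the configuration defined earlier:
resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True)
model = AutoModelForImageClassification.from_config(resnet50d_config)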
https://huggingface.co/docs/transformers/accelerate | Distributed training with 🤗 Accelerate
As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and accelerating training speed by several orders of magnitude. At Hugging Face, we created the 🤗 Accelerate library to help users easily train a 🤗 Transformers model on any type of distributed setup, whether it is multiple GPU’s on one machine or multiple GPU’s across several machines. In this tutorial, learn how to customize your native PyTorch training loop to enable training in a distributed environment.
Setup
Get started by installing 🤗 Accelerate:
pip install accelerate
Then import and create an Accelerator object. The Accelerator will automatically detect your type of distributed setup and initialize all the necessary components for training. You don’t need to explicitly place your model on a device.
>>> from accelerate import Accelerator
>>> accelerator = Accelerator()
Prepare to accelerate
The next step is to pass all the relevant training objects to the prepare method. This includes your training and evaluation DataLoaders, a model and an optimizer:
>>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
... train_dataloader, eval_dataloader, model, optimizer
... )
Backward
The last addition is to replace the typical loss.backward() in your training loop with 🤗 Accelerate’s backward method:
>>> for epoch in range(num_epochs):
...     for batch in train_dataloader:
...         outputs = model(**batch)
...         loss = outputs.loss
...         accelerator.backward(loss)
...         optimizer.step()
...         lr_scheduler.step()
...         optimizer.zero_grad()
...         progress_bar.update(1)
As you can see in the following code, you only need to add four additional lines of code to your training loop to enable distributed training!
+ from accelerate import Accelerator
from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler
+ accelerator = Accelerator()
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
optimizer = AdamW(model.parameters(), lr=3e-5)
- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- model.to(device)
+ train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
+ train_dataloader, eval_dataloader, model, optimizer
+ )
num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
"linear",
optimizer=optimizer,
num_warmup_steps=0,
num_training_steps=num_training_steps
)
progress_bar = tqdm(range(num_training_steps))
model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
-       batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)
        loss = outputs.loss
-       loss.backward()
+       accelerator.backward(loss)
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)
Train
Once you’ve added the relevant lines of code, launch your training in a script or a notebook like Colaboratory.
Train with a script
If you are running your training from a script, run the following command to create and save a configuration file:
accelerate config
Then launch your training with:
accelerate launch train.py
Train with a notebook
🤗 Accelerate can also run in a notebook if you’re planning on using Colaboratory’s TPUs. Wrap all the code responsible for training in a function, and pass it to notebook_launcher:
>>> from accelerate import notebook_launcher
>>> notebook_launcher(training_function)
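For example, on a Colab TPU runtime with 8 cores you would typically launch it like this (the process count is an assumption about your runtime):
>>> notebook_launcher(training_function, num_processes=8)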
For more information about 🤗 Accelerate and its rich features, refer to the documentation. |
https://huggingface.co/docs/transformers/sagemaker | Transformers documentation
Run training on Amazon SageMaker
https://huggingface.co/docs/transformers/fast_tokenizers | Use tokenizers from 🤗 Tokenizers
The PreTrainedTokenizerFast depends on the 🤗 Tokenizers library. The tokenizers obtained from the 🤗 Tokenizers library can be loaded very simply into 🤗 Transformers.
Before getting into the specifics, let’s first start by creating a dummy tokenizer in a few lines:
>>> from tokenizers import Tokenizer
>>> from tokenizers.models import BPE
>>> from tokenizers.trainers import BpeTrainer
>>> from tokenizers.pre_tokenizers import Whitespace
>>> tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
>>> trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
>>> tokenizer.pre_tokenizer = Whitespace()
>>> files = [...]
>>> tokenizer.train(files, trainer)
We now have a tokenizer trained on the files we defined. We can either continue using it in that runtime, or save it to a JSON file for future re-use.
Loading directly from the tokenizer object
Let’s see how to leverage this tokenizer object in the 🤗 Transformers library. The PreTrainedTokenizerFast class allows for easy instantiation, by accepting the instantiated tokenizer object as an argument:
>>> from transformers import PreTrainedTokenizerFast
>>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)
This object can now be used with all the methods shared by the 🤗 Transformers tokenizers! Head to the tokenizer page for more information.
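For example, you can call it directly to encode text (a small sketch; the resulting token ids depend on the files you trained on):
>>> encoding = fast_tokenizer("Hello, world!")
>>> encoding.input_ids  # token ids produced by the BPE model trained above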
Loading from a JSON file
In order to load a tokenizer from a JSON file, let’s first start by saving our tokenizer:
>>> tokenizer.save("tokenizer.json")
The path to which we saved this file can be passed to the PreTrainedTokenizerFast initialization method using the tokenizer_file parameter:
>>> from transformers import PreTrainedTokenizerFast
>>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")
This object can now be used with all the methods shared by the 🤗 Transformers tokenizers! Head to the tokenizer page for more information. |
https://huggingface.co/docs/transformers/serialization | Export to ONNX
Deploying 🤗 Transformers models in production environments often requires, or can benefit from exporting the models into a serialized format that can be loaded and executed on specialized runtimes and hardware.
🤗 Optimum is an extension of Transformers that enables exporting models from PyTorch or TensorFlow to serialized formats such as ONNX and TFLite through its exporters module. 🤗 Optimum also provides a set of performance optimization tools to train and run models on targeted hardware with maximum efficiency.
This guide demonstrates how you can export 🤗 Transformers models to ONNX with 🤗 Optimum; for the guide on exporting models to TFLite, please refer to the Export to TFLite page.
Export to ONNX
ONNX (Open Neural Network eXchange) is an open standard that defines a common set of operators and a common file format to represent deep learning models in a wide variety of frameworks, including PyTorch and TensorFlow. When a model is exported to the ONNX format, these operators are used to construct a computational graph (often called an intermediate representation) which represents the flow of data through the neural network.
By exposing a graph with standardized operators and data types, ONNX makes it easy to switch between frameworks. For example, a model trained in PyTorch can be exported to ONNX format and then imported in TensorFlow (and vice versa).
Once exported to ONNX format, a model can be:
optimized for inference via techniques such as graph optimization and quantization.
run with ONNX Runtime via ORTModelForXXX classes, which follow the same AutoModel API as the one you are used to in 🤗 Transformers.
run with optimized inference pipelines, which has the same API as the pipeline() function in 🤗 Transformers.
🤗 Optimum provides support for the ONNX export by leveraging configuration objects. These configuration objects come ready-made for a number of model architectures, and are designed to be easily extendable to other architectures.
For the list of ready-made configurations, please refer to 🤗 Optimum documentation.
There are two ways to export a 🤗 Transformers model to ONNX, here we show both:
export with 🤗 Optimum via CLI.
export with 🤗 Optimum with optimum.onnxruntime.
Exporting a 🤗 Transformers model to ONNX with CLI
To export a 🤗 Transformers model to ONNX, first install an extra dependency:
pip install optimum[exporters]
To check out all available arguments, refer to the 🤗 Optimum docs, or view help in command line:
optimum-cli export onnx --help
To export a model’s checkpoint from the 🤗 Hub, for example, distilbert-base-uncased-distilled-squad, run the following command:
optimum-cli export onnx --model distilbert-base-uncased-distilled-squad distilbert_base_uncased_squad_onnx/
You should see the logs indicating progress and showing where the resulting model.onnx is saved, like this:
Validating ONNX model distilbert_base_uncased_squad_onnx/model.onnx...
-[✓] ONNX model output names match reference model (start_logits, end_logits)
- Validating ONNX Model output "start_logits":
-[✓] (2, 16) matches (2, 16)
-[✓] all values close (atol: 0.0001)
- Validating ONNX Model output "end_logits":
-[✓] (2, 16) matches (2, 16)
-[✓] all values close (atol: 0.0001)
The ONNX export succeeded and the exported model was saved at: distilbert_base_uncased_squad_onnx
The example above illustrates exporting a checkpoint from 🤗 Hub. When exporting a local model, first make sure that you saved both the model’s weights and tokenizer files in the same directory (local_path). When using the CLI, pass the local_path to the model argument instead of the checkpoint name on 🤗 Hub and provide the --task argument. You can review the list of supported tasks in the 🤗 Optimum documentation. If the --task argument is not provided, it will default to the model architecture without any task-specific head.
optimum-cli export onnx --model local_path --task question-answering distilbert_base_uncased_squad_onnx/
The resulting model.onnx file can then be run on one of the many accelerators that support the ONNX standard. For example, we can load and run the model with ONNX Runtime as follows:
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForQuestionAnswering
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert_base_uncased_squad_onnx")
>>> model = ORTModelForQuestionAnswering.from_pretrained("distilbert_base_uncased_squad_onnx")
>>> inputs = tokenizer("What am I using?", "Using DistilBERT with ONNX Runtime!", return_tensors="pt")
>>> outputs = model(**inputs)
The process is identical for TensorFlow checkpoints on the Hub. For instance, here’s how you would export a pure TensorFlow checkpoint from the Keras organization:
optimum-cli export onnx --model keras-io/transformers-qa distilbert_base_cased_squad_onnx/
Exporting a 🤗 Transformers model to ONNX with optimum.onnxruntime
As an alternative to the CLI, you can export a 🤗 Transformers model to ONNX programmatically like so:
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import AutoTokenizer
>>> model_checkpoint = "distilbert_base_uncased_squad"
>>> save_directory = "onnx/"
>>> # Load a model from transformers and export it to ONNX
>>> ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True)
>>> tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
>>> # Save the ONNX model and tokenizer
>>> ort_model.save_pretrained(save_directory)
>>> tokenizer.save_pretrained(save_directory)
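The saved directory can then be loaded back for inference with ONNX Runtime, much like in the CLI example above (a short sketch; the input sentence is illustrative):
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import AutoTokenizer

>>> ort_model = ORTModelForSequenceClassification.from_pretrained(save_directory)
>>> tokenizer = AutoTokenizer.from_pretrained(save_directory)
>>> inputs = tokenizer("ONNX Runtime makes inference fast!", return_tensors="pt")
>>> outputs = ort_model(**inputs)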
Exporting a model for an unsupported architecture
If you wish to contribute by adding support for a model that cannot be currently exported, you should first check if it is supported in optimum.exporters.onnx, and if it is not, contribute to 🤗 Optimum directly.
Exporting a model with transformers.onnx
transformers.onnx is no longer maintained; please export models with 🤗 Optimum as described above. This section will be removed in future versions.
To export a 🤗 Transformers model to ONNX with transformers.onnx, install the extra dependencies:
pip install transformers[onnx]
Use transformers.onnx package as a Python module to export a checkpoint using a ready-made configuration:
python -m transformers.onnx --model=distilbert-base-uncased onnx/
This exports an ONNX graph of the checkpoint defined by the --model argument. Pass any checkpoint on the 🤗 Hub or one that’s stored locally. The resulting model.onnx file can then be run on one of the many accelerators that support the ONNX standard. For example, load and run the model with ONNX Runtime as follows:
>>> from transformers import AutoTokenizer
>>> from onnxruntime import InferenceSession
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> session = InferenceSession("onnx/model.onnx")
>>> # ONNX Runtime expects NumPy arrays as input
>>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
>>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
The required output names (like ["last_hidden_state"]) can be obtained by taking a look at the ONNX configuration of each model. For example, for DistilBERT we have:
>>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig
>>> config = DistilBertConfig()
>>> onnx_config = DistilBertOnnxConfig(config)
>>> print(list(onnx_config.outputs.keys()))
["last_hidden_state"]
The process is identical for TensorFlow checkpoints on the Hub. For example, export a pure TensorFlow checkpoint like so:
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
To export a model that’s stored locally, save the model’s weights and tokenizer files in the same directory (e.g. local-pt-checkpoint), then export it to ONNX by pointing the --model argument of the transformers.onnx package to the desired directory:
python -m transformers.onnx --model=local-pt-checkpoint onnx/ |
https://huggingface.co/docs/transformers/benchmarks | Benchmarks
Hugging Face’s Benchmarking tools are deprecated and it is advised to use external Benchmarking libraries to measure the speed and memory complexity of Transformer models.
Let’s take a look at how 🤗 Transformers models can be benchmarked, best practices, and already available benchmarks.
A notebook explaining in more detail how to benchmark 🤗 Transformers models can be found here.
How to benchmark 🤗 Transformers models
The classes PyTorchBenchmark and TensorFlowBenchmark allow you to flexibly benchmark 🤗 Transformers models. The benchmark classes let us measure the peak memory usage and required time for both inference and training.
Here, inference is defined as a single forward pass, and training as a single forward pass plus a backward pass.
The benchmark classes PyTorchBenchmark and TensorFlowBenchmark expect an object of type PyTorchBenchmarkArguments and TensorFlowBenchmarkArguments, respectively, for instantiation. PyTorchBenchmarkArguments and TensorFlowBenchmarkArguments are data classes and contain all relevant configurations for their corresponding benchmark class. In the following example, it is shown how a BERT model of type bert-base-cased can be benchmarked.
Pytorch
Hide Pytorch content
>>> from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments
>>> args = PyTorchBenchmarkArguments(models=["bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512])
>>> benchmark = PyTorchBenchmark(args)
TensorFlow
Hide TensorFlow content
>>> from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments
>>> args = TensorFlowBenchmarkArguments(
... models=["bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]
... )
>>> benchmark = TensorFlowBenchmark(args)
Here, three arguments are given to the benchmark argument data classes, namely models, batch_sizes, and sequence_lengths. The argument models is required and expects a list of model identifiers from the model hub. The list arguments batch_sizes and sequence_lengths define the size of the input_ids on which the model is benchmarked. There are many more parameters that can be configured via the benchmark argument data classes. For more detail on these, you can either directly consult the files src/transformers/benchmark/benchmark_args_utils.py, src/transformers/benchmark/benchmark_args.py (for PyTorch) and src/transformers/benchmark/benchmark_args_tf.py (for TensorFlow), or run the following shell commands from the repository root to print out a descriptive list of all configurable parameters for PyTorch and TensorFlow respectively.
Pytorch
Hide Pytorch content
python examples/pytorch/benchmarking/run_benchmark.py --help
An instantiated benchmark object can then simply be run by calling benchmark.run().
>>> results = benchmark.run()
>>> print(results)
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-base-uncased 8 8 0.006
bert-base-uncased 8 32 0.006
bert-base-uncased 8 128 0.018
bert-base-uncased 8 512 0.088
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
bert-base-uncased 8 8 1227
bert-base-uncased 8 32 1281
bert-base-uncased 8 128 1307
bert-base-uncased 8 512 1539
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 2.11.0
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.4.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 08:58:43.371351
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
TensorFlow
Hide TensorFlow content
python examples/tensorflow/benchmarking/run_benchmark_tf.py --help
An instantiated benchmark object can then simply be run by calling benchmark.run().
>>> results = benchmark.run()
>>> print(results)
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-base-uncased 8 8 0.005
bert-base-uncased 8 32 0.008
bert-base-uncased 8 128 0.022
bert-base-uncased 8 512 0.105
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
bert-base-uncased 8 8 1330
bert-base-uncased 8 32 1330
bert-base-uncased 8 128 1330
bert-base-uncased 8 512 1770
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 2.11.0
- framework: Tensorflow
- use_xla: False
- framework_version: 2.2.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 09:26:35.617317
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
By default, the time and the required memory for inference are benchmarked. In the example output above the first two sections show the result corresponding to inference time and inference memory. In addition, all relevant information about the computing environment, e.g. the GPU type, the system, the library versions, etc… are printed out in the third section under ENVIRONMENT INFORMATION. This information can optionally be saved in a .csv file when adding the argument save_to_csv=True to PyTorchBenchmarkArguments and TensorFlowBenchmarkArguments respectively. In this case, every section is saved in a separate .csv file. The path to each .csv file can optionally be defined via the argument data classes.
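For example, here is a sketch of saving the results to custom file names (the exact keyword argument names may differ slightly between versions, so check the argument files mentioned above):
>>> args = PyTorchBenchmarkArguments(
...     models=["bert-base-uncased"],
...     batch_sizes=[8],
...     sequence_lengths=[8, 32],
...     save_to_csv=True,
...     inference_time_csv_file="inference_time.csv",
...     inference_memory_csv_file="inference_memory.csv",
...     env_info_csv_file="env_info.csv",
... )
>>> benchmark = PyTorchBenchmark(args)
>>> results = benchmark.run()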
Instead of benchmarking pre-trained models via their model identifier, e.g. bert-base-uncased, the user can alternatively benchmark an arbitrary configuration of any available model class. In this case, a list of configurations must be inserted with the benchmark args as follows.
Pytorch
Hide Pytorch content
>>> from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments, BertConfig
>>> args = PyTorchBenchmarkArguments(
... models=["bert-base", "bert-384-hid", "bert-6-lay"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]
... )
>>> config_base = BertConfig()
>>> config_384_hid = BertConfig(hidden_size=384)
>>> config_6_lay = BertConfig(num_hidden_layers=6)
>>> benchmark = PyTorchBenchmark(args, configs=[config_base, config_384_hid, config_6_lay])
>>> benchmark.run()
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-base 8 8 0.006
bert-base 8 32 0.006
bert-base 8 128 0.018
bert-base 8 512 0.088
bert-384-hid 8 8 0.006
bert-384-hid 8 32 0.006
bert-384-hid 8 128 0.011
bert-384-hid 8 512 0.054
bert-6-lay 8 8 0.003
bert-6-lay 8 32 0.004
bert-6-lay 8 128 0.009
bert-6-lay 8 512 0.044
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
bert-base 8 8 1277
bert-base 8 32 1281
bert-base 8 128 1307
bert-base 8 512 1539
bert-384-hid 8 8 1005
bert-384-hid 8 32 1027
bert-384-hid 8 128 1035
bert-384-hid 8 512 1255
bert-6-lay 8 8 1097
bert-6-lay 8 32 1101
bert-6-lay 8 128 1127
bert-6-lay 8 512 1359
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 2.11.0
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.4.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 09:35:25.143267
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
TensorFlow
Hide TensorFlow content
>>> from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments, BertConfig
>>> args = TensorFlowBenchmarkArguments(
... models=["bert-base", "bert-384-hid", "bert-6-lay"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]
... )
>>> config_base = BertConfig()
>>> config_384_hid = BertConfig(hidden_size=384)
>>> config_6_lay = BertConfig(num_hidden_layers=6)
>>> benchmark = TensorFlowBenchmark(args, configs=[config_base, config_384_hid, config_6_lay])
>>> benchmark.run()
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-base 8 8 0.005
bert-base 8 32 0.008
bert-base 8 128 0.022
bert-base 8 512 0.106
bert-384-hid 8 8 0.005
bert-384-hid 8 32 0.007
bert-384-hid 8 128 0.018
bert-384-hid 8 512 0.064
bert-6-lay 8 8 0.002
bert-6-lay 8 32 0.003
bert-6-lay 8 128 0.0011
bert-6-lay 8 512 0.074
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
bert-base 8 8 1330
bert-base 8 32 1330
bert-base 8 128 1330
bert-base 8 512 1770
bert-384-hid 8 8 1330
bert-384-hid 8 32 1330
bert-384-hid 8 128 1330
bert-384-hid 8 512 1540
bert-6-lay 8 8 1330
bert-6-lay 8 32 1330
bert-6-lay 8 128 1330
bert-6-lay 8 512 1540
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 2.11.0
- framework: Tensorflow
- use_xla: False
- framework_version: 2.2.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 09:38:15.487125
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
Again, inference time and required memory for inference are measured, but this time for customized configurations of the BertModel class. This feature can especially be helpful when deciding for which configuration the model should be trained.
Benchmark best practices
This section lists a couple of best practices one should be aware of when benchmarking a model.
Currently, only single device benchmarking is supported. When benchmarking on GPU, it is recommended that the user specifies on which device the code should be run by setting the CUDA_VISIBLE_DEVICES environment variable in the shell, e.g. export CUDA_VISIBLE_DEVICES=0 before running the code.
The option no_multi_processing should only be set to True for testing and debugging. To ensure accurate memory measurement, each memory benchmark should run in a separate process, which is the case as long as multi-processing is left enabled (i.e. no_multi_processing is not set to True).
One should always state the environment information when sharing the results of a model benchmark. Results can vary heavily between different GPU devices, library versions, etc., so that benchmark results on their own are not very useful for the community.
Sharing your benchmark
Previously, all available core models (10 at the time) were benchmarked for inference time, across many different settings: using PyTorch, with and without TorchScript, and using TensorFlow, with and without XLA. All of those tests were done across CPUs (except for TensorFlow XLA) and GPUs.
The approach is detailed in the following blogpost and the results are available here.
With the new benchmark tools, it is easier than ever to share your benchmark results with the community:
PyTorch Benchmarking Results.
TensorFlow Benchmarking Results. |
https://huggingface.co/docs/transformers/chat_templating | Templates for Chat Models
Introduction
An increasingly common use case for LLMs is chat. In a chat context, rather than continuing a single string of text (as is the case with a standard language model), the model instead continues a conversation that consists of one or more messages, each of which includes a role as well as message text.
Most commonly, these roles are “user” for messages sent by the user, and “assistant” for messages sent by the model. Some models also support a “system” role. System messages are usually sent at the beginning of the conversation and include directives about how the model should behave in the subsequent chat.
All language models, including models fine-tuned for chat, operate on linear sequences of tokens and do not intrinsically have special handling for roles. This means that role information is usually injected by adding control tokens between messages, to indicate both the message boundary and the relevant roles.
Unfortunately, there isn’t (yet!) a standard for which tokens to use, and so different models have been trained with wildly different formatting and control tokens for chat. This can be a real problem for users - if you use the wrong format, then the model will be confused by your input, and your performance will be a lot worse than it should be. This is the problem that chat templates aim to resolve.
Chat conversations are typically represented as a list of dictionaries, where each dictionary contains role and content keys, and represents a single chat message. Chat templates are strings containing a Jinja template that specifies how to format a conversation for a given model into a single tokenizable sequence. By storing this information with the tokenizer, we can ensure that models get input data in the format they expect.
Let’s make this concrete with a quick example using the BlenderBot model. BlenderBot has an extremely simple default template, which mostly just adds whitespace between rounds of dialogue:
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
>>> chat = [
... {"role": "user", "content": "Hello, how are you?"},
... {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
... {"role": "user", "content": "I'd like to show off how chat templating works!"},
... ]
>>> tokenizer.apply_chat_template(chat, tokenize=False)
" Hello, how are you? I'm doing great. How can I help you today? I'd like to show off how chat templating works!</s>"
Notice how the entire chat is condensed into a single string. If we use tokenize=True, which is the default setting, that string will also be tokenized for us. To see a more complex template in action, though, let’s use the meta-llama/Llama-2-7b-chat-hf model. Note that this model has gated access, so you will have to request access on the repo if you want to run this code yourself:
>> from transformers import AutoTokenizer
>> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
>> chat = [
... {"role": "user", "content": "Hello, how are you?"},
... {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
... {"role": "user", "content": "I'd like to show off how chat templating works!"},
... ]
>> tokenizer.use_default_system_prompt = False
>> tokenizer.apply_chat_template(chat, tokenize=False)
"<s>[INST] Hello, how are you? [/INST] I'm doing great. How can I help you today? </s><s>[INST] I'd like to show off how chat templating works! [/INST]"
Note that this time, the tokenizer has added the control tokens [INST] and [/INST] to indicate the start and end of user messages (but not assistant messages!)
How do chat templates work?
The chat template for a model is stored on the tokenizer.chat_template attribute. If no chat template is set, the default template for that model class is used instead. Let’s take a look at the template for BlenderBot:
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
>>> tokenizer.default_chat_template
"{% for message in messages %}{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}{{ message['content'] }}{% if not loop.last %}{{ ' ' }}{% endif %}{% endfor %}{{ eos_token }}"
That’s kind of intimidating. Let’s add some newlines and indentation to make it more readable. Note that we remove the first newline after each block as well as any preceding whitespace before a block by default, using the Jinja trim_blocks and lstrip_blocks flags. This means that you can write your templates with indentations and newlines and still have them function correctly!
{% for message in messages %}
    {% if message['role'] == 'user' %}
        {{ ' ' }}
    {% endif %}
    {{ message['content'] }}
    {% if not loop.last %}
        {{ ' ' }}
    {% endif %}
{% endfor %}
{{ eos_token }}
If you’ve never seen one of these before, this is a Jinja template. Jinja is a templating language that allows you to write simple code that generates text. In many ways, the code and syntax resembles Python. In pure Python, this template would look something like this:
for idx, message in enumerate(messages):
    if message['role'] == 'user':
        print(' ')
    print(message['content'])
    if not idx == len(messages) - 1:
        print(' ')
print(eos_token)
Effectively, the template does three things:
For each message, if the message is a user message, add a blank space before it, otherwise print nothing.
Add the message content
If the message is not the last message, add two spaces after it. After the final message, print the EOS token.
This is a pretty simple template - it doesn’t add any control tokens, and it doesn’t support “system” messages, which are a common way to give the model directives about how it should behave in the subsequent conversation. But Jinja gives you a lot of flexibility to do those things! Let’s see a Jinja template that can format inputs similarly to the way LLaMA formats them (note that the real LLaMA template includes handling for default system messages and slightly different system message handling in general - don’t use this one in your actual code!)
{% for message in messages %}
    {% if message['role'] == 'user' %}
        {{ bos_token + '[INST] ' + message['content'] + ' [/INST]' }}
    {% elif message['role'] == 'system' %}
        {{ '<<SYS>>\\n' + message['content'] + '\\n<</SYS>>\\n\\n' }}
    {% elif message['role'] == 'assistant' %}
        {{ ' ' + message['content'] + ' ' + eos_token }}
    {% endif %}
{% endfor %}
Hopefully if you stare at this for a little bit you can see what this template is doing - it adds specific tokens based on the “role” of each message, which represents who sent it. User, assistant and system messages are clearly distinguishable to the model because of the tokens they’re wrapped in.
How do I create a chat template?
Simple, just write a jinja template and set tokenizer.chat_template. You may find it easier to start with an existing template from another model and simply edit it for your needs! For example, we could take the LLaMA template above and add ”[ASST]” and ”[/ASST]” to assistant messages:
{% for message in messages %}
    {% if message['role'] == 'user' %}
        {{ bos_token + '[INST] ' + message['content'].strip() + ' [/INST]' }}
    {% elif message['role'] == 'system' %}
        {{ '<<SYS>>\\n' + message['content'].strip() + '\\n<</SYS>>\\n\\n' }}
    {% elif message['role'] == 'assistant' %}
        {{ '[ASST] ' + message['content'] + ' [/ASST]' + eos_token }}
    {% endif %}
{% endfor %}
Now, simply set the tokenizer.chat_template attribute. Next time you use apply_chat_template(), it will use your new template! This attribute will be saved in the tokenizer_config.json file, so you can use push_to_hub() to upload your new template to the Hub and make sure everyone’s using the right template for your model!
template = tokenizer.chat_template
template = template.replace("SYS", "SYSTEM")
tokenizer.chat_template = template
tokenizer.push_to_hub("model_name")
The method apply_chat_template() which uses your chat template is called by the ConversationalPipeline class, so once you set the correct chat template, your model will automatically become compatible with ConversationalPipeline.
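As a minimal sketch of what that looks like in practice, reusing the BlenderBot checkpoint from the earlier example (any chat model whose tokenizer has a template set should behave the same way):
from transformers import pipeline, Conversation
chatbot = pipeline("conversational", model="facebook/blenderbot-400M-distill")
conversation = Conversation("Hello, how are you?")
conversation = chatbot(conversation)
# The pipeline formats the conversation with the chat template, generates a reply,
# and appends it to the conversation object
print(conversation.generated_responses[-1])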
What are "default" templates?
Before the introduction of chat templates, chat handling was hardcoded at the model class level. For backwards compatibility, we have retained this class-specific handling as default templates, also set at the class level. If a model does not have a chat template set, but there is a default template for its model class, the ConversationalPipeline class and methods like apply_chat_template will use the class template instead. You can find out what the default template for your tokenizer is by checking the tokenizer.default_chat_template attribute.
This is something we do purely for backward compatibility reasons, to avoid breaking any existing workflows. Even when the class template is appropriate for your model, we strongly recommend overriding the default template by setting the chat_template attribute explicitly to make it clear to users that your model has been correctly configured for chat, and to future-proof in case the default templates are ever altered or deprecated.
What template should I use?
When setting the template for a model that’s already been trained for chat, you should ensure that the template exactly matches the message formatting that the model saw during training, or else you will probably experience performance degradation. This is true even if you’re training the model further - you will probably get the best performance if you keep the chat tokens constant. This is very analogous to tokenization - you generally get the best performance for inference or fine-tuning when you precisely match the tokenization used during training.
If you’re training a model from scratch, or fine-tuning a base language model for chat, on the other hand, you have a lot of freedom to choose an appropriate template! LLMs are smart enough to learn to handle lots of different input formats. Our default template for models that don’t have a class-specific template follows the ChatML format, and this is a good, flexible choice for many use-cases. It looks like this:
{% for message in messages %}
    {{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}
{% endfor %}
If you like this one, here it is in one-liner form, ready to copy into your code:
tokenizer.chat_template = "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}"
This template wraps each message in <|im_start|> and <|im_end|> tokens, and simply writes the role as a string, which allows for flexibility in the roles you train with. The output looks like this:
<|im_start|>system
You are a helpful chatbot that will do its best not to say anything so stupid that people tweet about it.<|im_end|>
<|im_start|>user
How are you?<|im_end|>
<|im_start|>assistant
I'm doing great!<|im_end|>
The “user”, “system” and “assistant” roles are the standard for chat, and we recommend using them when it makes sense, particularly if you want your model to operate well with ConversationalPipeline. However, you are not limited to these roles - templating is extremely flexible, and any string can be a role.
I want to use chat templates! How should I get started?
If you have any chat models, you should set their tokenizer.chat_template attribute and test it using apply_chat_template(). This applies even if you’re not the model owner - if you’re using a model with an empty chat template, or one that’s still using the default class template, please open a pull request to the model repository so that this attribute can be set properly!
Once the attribute is set, that’s it, you’re done! tokenizer.apply_chat_template will now work correctly for that model, which means it is also automatically supported in places like ConversationalPipeline!
By ensuring that models have this attribute, we can make sure that the whole community gets to use the full power of open-source models. Formatting mismatches have been haunting the field and silently harming performance for too long - it's time to put an end to them!
https://huggingface.co/docs/transformers/create_a_model | Create a custom architecture
An AutoClass automatically infers the model architecture and downloads pretrained configuration and weights. Generally, we recommend using an AutoClass to produce checkpoint-agnostic code. But users who want more control over specific model parameters can create a custom 🤗 Transformers model from just a few base classes. This could be particularly useful for anyone who is interested in studying, training or experimenting with a 🤗 Transformers model. In this guide, dive deeper into creating a custom model without an AutoClass. Learn how to:
Load and customize a model configuration.
Create a model architecture.
Create a slow and fast tokenizer for text.
Create an image processor for vision tasks.
Create a feature extractor for audio tasks.
Create a processor for multimodal tasks.
Configuration
A configuration refers to a model’s specific attributes. Each model configuration has different attributes; for instance, all NLP models have the hidden_size, num_attention_heads, num_hidden_layers and vocab_size attributes in common. These attributes specify the number of attention heads or hidden layers to construct a model with.
Get a closer look at DistilBERT by accessing DistilBertConfig to inspect its attributes:
>>> from transformers import DistilBertConfig
>>> config = DistilBertConfig()
>>> print(config)
DistilBertConfig {
"activation": "gelu",
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"transformers_version": "4.16.2",
"vocab_size": 30522
}
DistilBertConfig displays all the default attributes used to build a base DistilBertModel. All attributes are customizable, creating space for experimentation. For example, you can customize a default model to:
Try a different activation function with the activation parameter.
Use a higher dropout ratio for the attention probabilities with the attention_dropout parameter.
>>> my_config = DistilBertConfig(activation="relu", attention_dropout=0.4)
>>> print(my_config)
DistilBertConfig {
"activation": "relu",
"attention_dropout": 0.4,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"transformers_version": "4.16.2",
"vocab_size": 30522
}
Pretrained model attributes can be modified in the from_pretrained() function:
>>> my_config = DistilBertConfig.from_pretrained("distilbert-base-uncased", activation="relu", attention_dropout=0.4)
Once you are satisfied with your model configuration, you can save it with save_pretrained(). Your configuration file is stored as a JSON file in the specified save directory:
>>> my_config.save_pretrained(save_directory="./your_model_save_path")
To reuse the configuration file, load it with from_pretrained():
>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json")
You can also save your configuration file as a dictionary or even just the difference between your custom configuration attributes and the default configuration attributes! See the configuration documentation for more details.
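For example, to_dict() returns the full configuration as a dictionary, while to_diff_dict() keeps only the attributes that differ from the defaults (a quick sketch):
>>> my_config = DistilBertConfig(activation="relu", attention_dropout=0.4)
>>> # The full configuration as a plain Python dictionary
>>> config_dict = my_config.to_dict()
>>> # Only the attributes that differ from the DistilBERT defaults
>>> diff_dict = my_config.to_diff_dict()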
Model
The next step is to create a model. The model - also loosely referred to as the architecture - defines what each layer is doing and what operations are happening. Attributes like num_hidden_layers from the configuration are used to define the architecture. Every model shares the base class PreTrainedModel and a few common methods like resizing input embeddings and pruning self-attention heads. In addition, all models are also either a torch.nn.Module, tf.keras.Model or flax.linen.Module subclass. This means models are compatible with each of their respective framework’s usage.
Pytorch
Hide Pytorch content
Load your custom configuration attributes into the model:
>>> from transformers import DistilBertModel
>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json")
>>> model = DistilBertModel(my_config)
This creates a model with random values instead of pretrained weights. You won't be able to use this model for anything useful until you train it. Training is a costly and time-consuming process. It is generally better to use a pretrained model to obtain better results faster, while using only a fraction of the resources required for training.
Create a pretrained model with from_pretrained():
>>> model = DistilBertModel.from_pretrained("distilbert-base-uncased")
When you load pretrained weights, the default model configuration is automatically loaded if the model is provided by 🤗 Transformers. However, you can still replace - some or all of - the default model configuration attributes with your own if you’d like:
>>> model = DistilBertModel.from_pretrained("distilbert-base-uncased", config=my_config)
TensorFlow
Hide TensorFlow content
Load your custom configuration attributes into the model:
>>> from transformers import TFDistilBertModel
>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json")
>>> tf_model = TFDistilBertModel(my_config)
This creates a model with random values instead of pretrained weights. You won't be able to use this model for anything useful until you train it. Training is a costly and time-consuming process. It is generally better to use a pretrained model to obtain better results faster, while using only a fraction of the resources required for training.
Create a pretrained model with from_pretrained():
>>> tf_model = TFDistilBertModel.from_pretrained("distilbert-base-uncased")
When you load pretrained weights, the default model configuration is automatically loaded if the model is provided by 🤗 Transformers. However, you can still replace - some or all of - the default model configuration attributes with your own if you’d like:
>>> tf_model = TFDistilBertModel.from_pretrained("distilbert-base-uncased", config=my_config)
Model heads
At this point, you have a base DistilBERT model which outputs the hidden states. The hidden states are passed as inputs to a model head to produce the final output. 🤗 Transformers provides a different model head for each task as long as a model supports the task (i.e., you can’t use DistilBERT for a sequence-to-sequence task like translation).
Pytorch
Hide Pytorch content
For example, DistilBertForSequenceClassification is a base DistilBERT model with a sequence classification head. The sequence classification head is a linear layer on top of the pooled outputs.
>>> from transformers import DistilBertForSequenceClassification
>>> model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
Easily reuse this checkpoint for another task by switching to a different model head. For a question answering task, you would use the DistilBertForQuestionAnswering model head. The question answering head is similar to the sequence classification head except it is a linear layer on top of the hidden states output.
>>> from transformers import DistilBertForQuestionAnswering
>>> model = DistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")
TensorFlow
Hide TensorFlow content
For example, TFDistilBertForSequenceClassification is a base DistilBERT model with a sequence classification head. The sequence classification head is a linear layer on top of the pooled outputs.
>>> from transformers import TFDistilBertForSequenceClassification
>>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
Easily reuse this checkpoint for another task by switching to a different model head. For a question answering task, you would use the TFDistilBertForQuestionAnswering model head. The question answering head is similar to the sequence classification head except it is a linear layer on top of the hidden states output.
>>> from transformers import TFDistilBertForQuestionAnswering
>>> tf_model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")
Tokenizer
The last base class you need before using a model for textual data is a tokenizer to convert raw text to tensors. There are two types of tokenizers you can use with 🤗 Transformers:
PreTrainedTokenizer: a Python implementation of a tokenizer.
PreTrainedTokenizerFast: a tokenizer from our Rust-based 🤗 Tokenizer library. This tokenizer type is significantly faster - especially during batch tokenization - due to its Rust implementation. The fast tokenizer also offers additional methods like offset mapping which maps tokens to their original words or characters.
Both tokenizers support common methods such as encoding and decoding, adding new tokens, and managing special tokens.
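As a short illustration of those shared methods (a sketch; the added token is arbitrary, and after adding tokens you would also resize the model's embeddings):
>>> from transformers import DistilBertTokenizerFast
>>> tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
>>> # Encoding: raw text to input ids (special tokens are added automatically)
>>> encoded = tokenizer("Hello, world!")
>>> # Decoding: input ids back to text
>>> text = tokenizer.decode(encoded["input_ids"])
>>> # Adding new tokens extends the vocabulary
>>> num_added = tokenizer.add_tokens(["my_new_token"])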
Not every model supports a fast tokenizer. Take a look at this table to check if a model has fast tokenizer support.
If you trained your own tokenizer, you can create one from your vocabulary file:
>>> from transformers import DistilBertTokenizer
>>> my_tokenizer = DistilBertTokenizer(vocab_file="my_vocab_file.txt", do_lower_case=False, padding_side="left")
It is important to remember that the vocabulary of a custom tokenizer will be different from the vocabulary generated by a pretrained model's tokenizer. You need to use a pretrained model's vocabulary if you are using a pretrained model, otherwise the inputs won't make sense. Create a tokenizer with a pretrained model's vocabulary with the DistilBertTokenizer class:
>>> from transformers import DistilBertTokenizer
>>> slow_tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
Create a fast tokenizer with the DistilBertTokenizerFast class:
>>> from transformers import DistilBertTokenizerFast
>>> fast_tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
By default, AutoTokenizer will try to load a fast tokenizer. You can disable this behavior by setting use_fast=False in from_pretrained.
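For example:
>>> from transformers import AutoTokenizer
>>> # Force the slow, pure-Python implementation
>>> slow_tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased", use_fast=False)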
Image Processor
An image processor processes vision inputs. It inherits from the base ImageProcessingMixin class.
To use, create an image processor associated with the model you’re using. For example, create a default ViTImageProcessor if you are using ViT for image classification:
>>> from transformers import ViTImageProcessor
>>> vit_extractor = ViTImageProcessor()
>>> print(vit_extractor)
ViTImageProcessor {
"do_normalize": true,
"do_resize": true,
"image_processor_type": "ViTImageProcessor",
"image_mean": [
0.5,
0.5,
0.5
],
"image_std": [
0.5,
0.5,
0.5
],
"resample": 2,
"size": 224
}
If you aren’t looking for any customization, just use the from_pretrained method to load a model’s default image processor parameters.
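For example (the checkpoint name here is only an illustration):
>>> from transformers import ViTImageProcessor
>>> # Loads the preprocessing settings that were saved alongside the checkpoint
>>> vit_extractor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")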
Modify any of the ViTImageProcessor parameters to create your custom image processor:
>>> from transformers import ViTImageProcessor
>>> my_vit_extractor = ViTImageProcessor(resample="PIL.Image.BOX", do_normalize=False, image_mean=[0.3, 0.3, 0.3])
>>> print(my_vit_extractor)
ViTImageProcessor {
"do_normalize": false,
"do_resize": true,
"image_processor_type": "ViTImageProcessor",
"image_mean": [
0.3,
0.3,
0.3
],
"image_std": [
0.5,
0.5,
0.5
],
"resample": "PIL.Image.BOX",
"size": 224
}
Feature Extractor
A feature extractor processes audio inputs. It inherits from the base FeatureExtractionMixin class, and may also inherit from the SequenceFeatureExtractor class for processing audio inputs.
To use, create a feature extractor associated with the model you’re using. For example, create a default Wav2Vec2FeatureExtractor if you are using Wav2Vec2 for audio classification:
>>> from transformers import Wav2Vec2FeatureExtractor
>>> w2v2_extractor = Wav2Vec2FeatureExtractor()
>>> print(w2v2_extractor)
Wav2Vec2FeatureExtractor {
"do_normalize": true,
"feature_extractor_type": "Wav2Vec2FeatureExtractor",
"feature_size": 1,
"padding_side": "right",
"padding_value": 0.0,
"return_attention_mask": false,
"sampling_rate": 16000
}
If you aren’t looking for any customization, just use the from_pretrained method to load a model’s default feature extractor parameters.
Modify any of the Wav2Vec2FeatureExtractor parameters to create your custom feature extractor:
>>> from transformers import Wav2Vec2FeatureExtractor
>>> w2v2_extractor = Wav2Vec2FeatureExtractor(sampling_rate=8000, do_normalize=False)
>>> print(w2v2_extractor)
Wav2Vec2FeatureExtractor {
"do_normalize": false,
"feature_extractor_type": "Wav2Vec2FeatureExtractor",
"feature_size": 1,
"padding_side": "right",
"padding_value": 0.0,
"return_attention_mask": false,
"sampling_rate": 8000
}
Processor
For models that support multimodal tasks, 🤗 Transformers offers a processor class that conveniently wraps processing classes such as a feature extractor and a tokenizer into a single object. For example, let’s use the Wav2Vec2Processor for an automatic speech recognition task (ASR). ASR transcribes audio to text, so you will need a feature extractor and a tokenizer.
Create a feature extractor to handle the audio inputs:
>>> from transformers import Wav2Vec2FeatureExtractor
>>> feature_extractor = Wav2Vec2FeatureExtractor(padding_value=1.0, do_normalize=True)
Create a tokenizer to handle the text inputs:
>>> from transformers import Wav2Vec2CTCTokenizer
>>> tokenizer = Wav2Vec2CTCTokenizer(vocab_file="my_vocab_file.txt")
Combine the feature extractor and tokenizer in Wav2Vec2Processor:
>>> from transformers import Wav2Vec2Processor
>>> processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
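As a rough sketch of how such a processor is then used, here with a pretrained checkpoint and synthetic audio standing in for real 16 kHz speech:
>>> import numpy as np
>>> from transformers import Wav2Vec2Processor
>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
>>> # One second of silence as a stand-in for decoded speech
>>> dummy_speech = np.zeros(16000, dtype=np.float32)
>>> # The processor routes the audio through its feature extractor
>>> inputs = processor(dummy_speech, sampling_rate=16000, return_tensors="pt")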
With two basic classes - configuration and model - and an additional preprocessing class (tokenizer, image processor, feature extractor, or processor), you can create any of the models supported by 🤗 Transformers. Each of these base classes is configurable, allowing you to use the specific attributes you want. You can easily set up a model for training or modify an existing pretrained model to fine-tune.
https://huggingface.co/docs/transformers/tflite | Export to TFLite
TensorFlow Lite is a lightweight framework for deploying machine learning models on resource-constrained devices, such as mobile phones, embedded systems, and Internet of Things (IoT) devices. TFLite is designed to optimize and run models efficiently on these devices with limited computational power, memory, and power consumption. A TensorFlow Lite model is represented in a special efficient portable format identified by the .tflite file extension.
🤗 Optimum offers functionality to export 🤗 Transformers models to TFLite through the exporters.tflite module. For the list of supported model architectures, please refer to 🤗 Optimum documentation.
To export a model to TFLite, install the required dependencies:
pip install optimum[exporters-tf]
To check out all available arguments, refer to the 🤗 Optimum docs, or view help in command line:
optimum-cli export tflite --help
To export a model’s checkpoint from the 🤗 Hub, for example, bert-base-uncased, run the following command:
optimum-cli export tflite --model bert-base-uncased --sequence_length 128 bert_tflite/
You should see the logs indicating progress and showing where the resulting model.tflite is saved, like this:
Validating TFLite model...
-[✓] TFLite model output names match reference model (logits)
- Validating TFLite Model output "logits":
-[✓] (1, 128, 30522) matches (1, 128, 30522)
-[x] values not close enough, max diff: 5.817413330078125e-05 (atol: 1e-05)
The TensorFlow Lite export succeeded with the warning: The maximum absolute difference between the output of the reference model and the TFLite exported model is not within the set tolerance 1e-05:
- logits: max diff = 5.817413330078125e-05.
The exported model was saved at: bert_tflite
The example above illustrates exporting a checkpoint from 🤗 Hub. When exporting a local model, first make sure that you saved both the model's weights and tokenizer files in the same directory (local_path). When using CLI, pass the local_path to the model argument instead of the checkpoint name on 🤗 Hub.
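Once exported, the model can be run with the TensorFlow Lite interpreter. The sketch below assumes the bert_tflite/model.tflite file produced above; input tensor names and ordering can vary between exports, so it matches graph inputs to tokenizer outputs by name rather than by position.
import numpy as np
import tensorflow as tf
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer("Hello, TFLite!", padding="max_length", max_length=128, return_tensors="np")
interpreter = tf.lite.Interpreter(model_path="bert_tflite/model.tflite")
interpreter.allocate_tensors()
# Feed each graph input with the matching tokenizer output, cast to the expected dtype
for detail in interpreter.get_input_details():
    for key in ("input_ids", "attention_mask", "token_type_ids"):
        if key in detail["name"]:
            interpreter.set_tensor(detail["index"], encoded[key].astype(detail["dtype"]))
interpreter.invoke()
logits = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
print(logits.shape)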
https://huggingface.co/docs/transformers/notebooks | 🤗 Transformers Notebooks
You can find here a list of the official notebooks provided by Hugging Face.
Also, we would like to list here interesting content created by the community. If you wrote some notebook(s) leveraging 🤗 Transformers and would like to be listed here, please open a Pull Request so it can be included under the Community notebooks.
Hugging Face's notebooks 🤗
Documentation notebooks
You can open any page of the documentation as a notebook in Colab (there is a button directly on said pages) but they are also listed here if you need them:
Notebook Description
Quicktour of the library A presentation of the various APIs in Transformers
Summary of the tasks How to run the models of the Transformers library task by task
Preprocessing data How to use a tokenizer to preprocess your data
Fine-tuning a pretrained model How to use the Trainer to fine-tune a pretrained model
Summary of the tokenizers The differences between the tokenizers algorithm
Multilingual models How to use the multilingual models of the library
PyTorch Examples
Natural Language Processing
Notebook Description
Train your tokenizer How to train and use your very own tokenizer
Train your language model How to easily start using transformers
How to fine-tune a model on text classification Show how to preprocess the data and fine-tune a pretrained model on any GLUE task.
How to fine-tune a model on language modeling Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task.
How to fine-tune a model on token classification Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS).
How to fine-tune a model on question answering Show how to preprocess the data and fine-tune a pretrained model on SQUAD.
How to fine-tune a model on multiple choice Show how to preprocess the data and fine-tune a pretrained model on SWAG.
How to fine-tune a model on translation Show how to preprocess the data and fine-tune a pretrained model on WMT.
How to fine-tune a model on summarization Show how to preprocess the data and fine-tune a pretrained model on XSUM.
How to train a language model from scratch Highlight all the steps to effectively train Transformer model on custom data
How to generate text How to use different decoding methods for language generation with transformers
How to generate text (with constraints) How to guide language generation with user-provided constraints
Reformer How Reformer pushes the limits of language modeling
Computer Vision
Notebook Description
How to fine-tune a model on image classification (Torchvision) Show how to preprocess the data using Torchvision and fine-tune any pretrained Vision model on Image Classification
How to fine-tune a model on image classification (Albumentations) Show how to preprocess the data using Albumentations and fine-tune any pretrained Vision model on Image Classification
How to fine-tune a model on image classification (Kornia) Show how to preprocess the data using Kornia and fine-tune any pretrained Vision model on Image Classification
How to perform zero-shot object detection with OWL-ViT Show how to perform zero-shot object detection on images with text queries
How to fine-tune an image captioning model Show how to fine-tune BLIP for image captioning on a custom dataset
How to build an image similarity system with Transformers Show how to build an image similarity system
How to fine-tune a SegFormer model on semantic segmentation Show how to preprocess the data and fine-tune a pretrained SegFormer model on Semantic Segmentation
How to fine-tune a VideoMAE model on video classification Show how to preprocess the data and fine-tune a pretrained VideoMAE model on Video Classification
Audio
Notebook Description
How to fine-tune a speech recognition model in English Show how to preprocess the data and fine-tune a pretrained Speech model on TIMIT
How to fine-tune a speech recognition model in any language Show how to preprocess the data and fine-tune a multi-lingually pretrained speech model on Common Voice
How to fine-tune a model on audio classification Show how to preprocess the data and fine-tune a pretrained Speech model on Keyword Spotting
Biological Sequences
Notebook Description
How to fine-tune a pre-trained protein model See how to tokenize proteins and fine-tune a large pre-trained protein “language” model
How to generate protein folds See how to go from protein sequence to a full protein model and PDB file
How to fine-tune a Nucleotide Transformer model See how to tokenize DNA and fine-tune a large pre-trained DNA “language” model
Fine-tune a Nucleotide Transformer model with LoRA Train even larger DNA models in a memory-efficient way
Other modalities
Notebook Description
Probabilistic Time Series Forecasting See how to train Time Series Transformer on a custom dataset
Utility notebooks
Notebook Description
How to export model to ONNX Highlight how to export and run inference workloads through ONNX
How to use Benchmarks How to benchmark models with transformers
TensorFlow Examples
Natural Language Processing
Notebook Description
Train your tokenizer How to train and use your very own tokenizer
Train your language model How to easily start using transformers
How to fine-tune a model on text classification Show how to preprocess the data and fine-tune a pretrained model on any GLUE task.
How to fine-tune a model on language modeling Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task.
How to fine-tune a model on token classification Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS).
How to fine-tune a model on question answering Show how to preprocess the data and fine-tune a pretrained model on SQUAD.
How to fine-tune a model on multiple choice Show how to preprocess the data and fine-tune a pretrained model on SWAG.
How to fine-tune a model on translation Show how to preprocess the data and fine-tune a pretrained model on WMT.
How to fine-tune a model on summarization Show how to preprocess the data and fine-tune a pretrained model on XSUM.
Computer Vision
Notebook Description
How to fine-tune a model on image classification Show how to preprocess the data and fine-tune any pretrained Vision model on Image Classification
How to fine-tune a SegFormer model on semantic segmentation Show how to preprocess the data and fine-tune a pretrained SegFormer model on Semantic Segmentation
Biological Sequences
Notebook Description
How to fine-tune a pre-trained protein model See how to tokenize proteins and fine-tune a large pre-trained protein “language” model
Utility notebooks
Notebook Description
How to train TF/Keras models on TPU See how to train at high speed on Google’s TPU hardware
Optimum notebooks
🤗 Optimum is an extension of 🤗 Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on targeted hardware.
Notebook Description
How to quantize a model with ONNX Runtime for text classification Show how to apply static and dynamic quantization on a model using ONNX Runtime for any GLUE task.
How to quantize a model with Intel Neural Compressor for text classification Show how to apply static, dynamic and aware training quantization on a model using Intel Neural Compressor (INC) for any GLUE task.
How to fine-tune a model on text classification with ONNX Runtime Show how to preprocess the data and fine-tune a model on any GLUE task using ONNX Runtime.
How to fine-tune a model on summarization with ONNX Runtime Show how to preprocess the data and fine-tune a model on XSUM using ONNX Runtime.
Community notebooks:
More notebooks developed by the community are available here.
https://huggingface.co/docs/transformers/torchscript | Export to TorchScript
This is the very beginning of our experiments with TorchScript and we are still exploring its capabilities with variable-input-size models. It is a focus of interest to us and we will deepen our analysis in upcoming releases, with more code examples, a more flexible implementation, and benchmarks comparing Python-based code with compiled TorchScript.
According to the TorchScript documentation:
TorchScript is a way to create serializable and optimizable models from PyTorch code.
There are two PyTorch modules, JIT and TRACE, that allow developers to export their models to be reused in other programs like efficiency-oriented C++ programs.
We provide an interface that allows you to export 🤗 Transformers models to TorchScript so they can be reused in a different environment than PyTorch-based Python programs. Here, we explain how to export and use our models using TorchScript.
Exporting a model requires two things:
model instantiation with the torchscript flag
a forward pass with dummy inputs
These necessities imply several things developers should be careful about as detailed below.
TorchScript flag and tied weights
The torchscript flag is necessary because most of the 🤗 Transformers language models have tied weights between their Embedding layer and their Decoding layer. TorchScript does not allow you to export models that have tied weights, so it is necessary to untie and clone the weights beforehand.
Models instantiated with the torchscript flag have their Embedding layer and Decoding layer separated, which means that they should not be trained down the line. Training would desynchronize the two layers, leading to unexpected results.
This is not the case for models that do not have a language model head, as those do not have tied weights. These models can be safely exported without the torchscript flag.
Dummy inputs and standard lengths
The dummy inputs are used for a model's forward pass. While the inputs' values are propagated through the layers, PyTorch keeps track of the different operations executed on each tensor. These recorded operations are then used to create the trace of the model.
The trace is created relative to the inputs’ dimensions. It is therefore constrained by the dimensions of the dummy input, and will not work for any other sequence length or batch size. When trying with a different size, the following error is raised:
`The expanded size of the tensor (3) must match the existing size (7) at non-singleton dimension 2`
We recommend you trace the model with a dummy input size at least as large as the largest input that will be fed to the model during inference. Padding can help fill the missing values. However, since the model is traced with a larger input size, the dimensions of the matrix will also be large, resulting in more calculations.
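For instance, a sketch of tracing with fixed-size, padded dummy inputs (the maximum length of 512 is just an illustrative choice):
from transformers import BertModel, BertTokenizer
import torch
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)
model.eval()
# Pad every input to the same, worst-case length so the trace covers it
encoded = tokenizer(
    "Who was Jim Henson?",
    padding="max_length",
    max_length=512,
    return_tensors="pt",
)
traced_model = torch.jit.trace(model, [encoded["input_ids"], encoded["attention_mask"]])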
Be careful of the total number of operations done on each input and follow the performance closely when exporting varying sequence-length models.
Using TorchScript in Python
This section demonstrates how to save and load models as well as how to use the trace for inference.
Saving a model
To export a BertModel with TorchScript, instantiate BertModel from the BertConfig class and then save it to disk under the filename traced_bert.pt:
from transformers import BertModel, BertTokenizer, BertConfig
import torch
# Tokenize an example input
enc = BertTokenizer.from_pretrained("bert-base-uncased")
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = enc.tokenize(text)
# Mask one of the input tokens
masked_index = 8
tokenized_text[masked_index] = "[MASK]"
indexed_tokens = enc.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
# Create the dummy inputs used for tracing
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
dummy_input = [tokens_tensor, segments_tensors]
# Initialize the configuration with the torchscript flag set
config = BertConfig(
    vocab_size_or_config_json_file=32000,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    torchscript=True,
)
# Instantiate the model from the configuration and set it to evaluation mode
model = BertModel(config)
model.eval()
# If you are instantiating the model with from_pretrained, you can also easily set the TorchScript flag
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)
# Create the trace and save it to disk
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
torch.jit.save(traced_model, "traced_bert.pt")
Loading a model
Now you can load the previously saved BertModel, traced_bert.pt, from disk and use it on the previously initialised dummy_input:
loaded_model = torch.jit.load("traced_bert.pt")
loaded_model.eval()
all_encoder_layers, pooled_output = loaded_model(*dummy_input)
Using a traced model for inference
Use the traced model for inference by using its __call__ dunder method:
traced_model(tokens_tensor, segments_tensors)
Deploy Hugging Face TorchScript models to AWS with the Neuron SDK
AWS introduced the Amazon EC2 Inf1 instance family for low cost, high performance machine learning inference in the cloud. The Inf1 instances are powered by the AWS Inferentia chip, a custom-built hardware accelerator, specializing in deep learning inferencing workloads. AWS Neuron is the SDK for Inferentia that supports tracing and optimizing transformers models for deployment on Inf1. The Neuron SDK provides:
Easy-to-use API with one line of code change to trace and optimize a TorchScript model for inference in the cloud.
Out of the box performance optimizations for improved cost-performance.
Support for Hugging Face transformers models built with either PyTorch or TensorFlow.
Implications
Transformers models based on the BERT (Bidirectional Encoder Representations from Transformers) architecture, or its variants such as distilBERT and roBERTa, run best on Inf1 for non-generative tasks such as extractive question answering, sequence classification, and token classification. However, text generation tasks can still be adapted to run on Inf1 according to this AWS Neuron MarianMT tutorial. More information about models that can be converted out of the box on Inferentia can be found in the Model Architecture Fit section of the Neuron documentation.
Dependencies
Using AWS Neuron to convert models requires a Neuron SDK environment which comes preconfigured on AWS Deep Learning AMI.
Converting a model for AWS Neuron
Convert a model for AWS NEURON using the same code from Using TorchScript in Python to trace a BertModel. Import the torch.neuron framework extension to access the components of the Neuron SDK through a Python API:
from transformers import BertModel, BertTokenizer, BertConfig
import torch
import torch.neuron
You only need to modify the following line:
- torch.jit.trace(model, [tokens_tensor, segments_tensors])
+ torch.neuron.trace(model, [tokens_tensor, segments_tensors])
This enables the Neuron SDK to trace the model and optimize it for Inf1 instances.
To learn more about AWS Neuron SDK features, tools, example tutorials and latest updates, please see the AWS Neuron SDK documentation.
https://huggingface.co/docs/transformers/community | Fine-tune a pre-trained Transformer to generate lyrics How to generate lyrics in the style of your favorite artist by fine-tuning a GPT-2 model Aleksey Korshuk Train T5 in Tensorflow 2 How to train T5 for any task using Tensorflow 2. This notebook demonstrates a Question & Answer task implemented in Tensorflow 2 using SQUAD Muhammad Harris Train T5 on TPU How to train T5 on SQUAD with Transformers and Nlp Suraj Patil Fine-tune T5 for Classification and Multiple Choice How to fine-tune T5 for classification and multiple choice tasks using a text-to-text format with PyTorch Lightning Suraj Patil Fine-tune DialoGPT on New Datasets and Languages How to fine-tune the DialoGPT model on a new dataset for open-dialog conversational chatbots Nathan Cooper Long Sequence Modeling with Reformer How to train on sequences as long as 500,000 tokens with Reformer Patrick von Platen Fine-tune BART for Summarization How to fine-tune BART for summarization with fastai using blurr Wayde Gilliam Fine-tune a pre-trained Transformer on anyone’s tweets How to generate tweets in the style of your favorite Twitter account by fine-tuning a GPT-2 model Boris Dayma Optimize 🤗 Hugging Face models with Weights & Biases A complete tutorial showcasing W&B integration with Hugging Face Boris Dayma Pretrain Longformer How to build a “long” version of existing pretrained models Iz Beltagy Fine-tune Longformer for QA How to fine-tune longformer model for QA task Suraj Patil Evaluate Model with 🤗nlp How to evaluate longformer on TriviaQA with nlp Patrick von Platen Fine-tune T5 for Sentiment Span Extraction How to fine-tune T5 for sentiment span extraction using a text-to-text format with PyTorch Lightning Lorenzo Ampil Fine-tune DistilBert for Multiclass Classification How to fine-tune DistilBert for multiclass classification with PyTorch Abhishek Kumar Mishra Fine-tune BERT for Multi-label Classification How to fine-tune BERT for multi-label classification using PyTorch Abhishek Kumar Mishra Fine-tune T5 for Summarization How to fine-tune T5 for summarization in PyTorch and track experiments with WandB Abhishek Kumar Mishra Speed up Fine-Tuning in Transformers with Dynamic Padding / Bucketing How to speed up fine-tuning by a factor of 2 using dynamic padding / bucketing Michael Benesty Pretrain Reformer for Masked Language Modeling How to train a Reformer model with bi-directional self-attention layers Patrick von Platen Expand and Fine Tune Sci-BERT How to increase vocabulary of a pretrained SciBERT model from AllenAI on the CORD dataset and pipeline it. Tanmay Thakur Fine Tune BlenderBotSmall for Summarization using the Trainer API How to fine-tune BlenderBotSmall for summarization on a custom dataset, using the Trainer API. 
Tanmay Thakur Fine-tune Electra and interpret with Integrated Gradients How to fine-tune Electra for sentiment analysis and interpret predictions with Captum Integrated Gradients Eliza Szczechla fine-tune a non-English GPT-2 Model with Trainer class How to fine-tune a non-English GPT-2 Model with Trainer class Philipp Schmid Fine-tune a DistilBERT Model for Multi Label Classification task How to fine-tune a DistilBERT Model for Multi Label Classification task Dhaval Taunk Fine-tune ALBERT for sentence-pair classification How to fine-tune an ALBERT model or another BERT-based model for the sentence-pair classification task Nadir El Manouzi Fine-tune Roberta for sentiment analysis How to fine-tune a Roberta model for sentiment analysis Dhaval Taunk Evaluating Question Generation Models How accurate are the answers to questions generated by your seq2seq transformer model? Pascal Zoleko Classify text with DistilBERT and Tensorflow How to fine-tune DistilBERT for text classification in TensorFlow Peter Bayerle Leverage BERT for Encoder-Decoder Summarization on CNN/Dailymail How to warm-start a EncoderDecoderModel with a bert-base-uncased checkpoint for summarization on CNN/Dailymail Patrick von Platen Leverage RoBERTa for Encoder-Decoder Summarization on BBC XSum How to warm-start a shared EncoderDecoderModel with a roberta-base checkpoint for summarization on BBC/XSum Patrick von Platen Fine-tune TAPAS on Sequential Question Answering (SQA) How to fine-tune TapasForQuestionAnswering with a tapas-base checkpoint on the Sequential Question Answering (SQA) dataset Niels Rogge Evaluate TAPAS on Table Fact Checking (TabFact) How to evaluate a fine-tuned TapasForSequenceClassification with a tapas-base-finetuned-tabfact checkpoint using a combination of the 🤗 datasets and 🤗 transformers libraries Niels Rogge Fine-tuning mBART for translation How to fine-tune mBART using Seq2SeqTrainer for Hindi to English translation Vasudev Gupta Fine-tune LayoutLM on FUNSD (a form understanding dataset) How to fine-tune LayoutLMForTokenClassification on the FUNSD dataset for information extraction from scanned documents Niels Rogge Fine-Tune DistilGPT2 and Generate Text How to fine-tune DistilGPT2 and generate text Aakash Tripathi Fine-Tune LED on up to 8K tokens How to fine-tune LED on pubmed for long-range summarization Patrick von Platen Evaluate LED on Arxiv How to effectively evaluate LED on long-range summarization Patrick von Platen Fine-tune LayoutLM on RVL-CDIP (a document image classification dataset) How to fine-tune LayoutLMForSequenceClassification on the RVL-CDIP dataset for scanned document classification Niels Rogge Wav2Vec2 CTC decoding with GPT2 adjustment How to decode CTC sequence with language model adjustment Eric Lam Fine-tune BART for summarization in two languages with Trainer class How to fine-tune BART for summarization in two languages with Trainer class Eliza Szczechla Evaluate Big Bird on Trivia QA How to evaluate BigBird on long document question answering on Trivia QA Patrick von Platen Create video captions using Wav2Vec2 How to create YouTube captions from any video by transcribing the audio with Wav2Vec Niklas Muennighoff Fine-tune the Vision Transformer on CIFAR-10 using PyTorch Lightning How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets and PyTorch Lightning Niels Rogge Fine-tune the Vision Transformer on CIFAR-10 using the 🤗 Trainer How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, 
Datasets and the 🤗 Trainer Niels Rogge Evaluate LUKE on Open Entity, an entity typing dataset How to evaluate LukeForEntityClassification on the Open Entity dataset Ikuya Yamada Evaluate LUKE on TACRED, a relation extraction dataset How to evaluate LukeForEntityPairClassification on the TACRED dataset Ikuya Yamada Evaluate LUKE on CoNLL-2003, an important NER benchmark How to evaluate LukeForEntitySpanClassification on the CoNLL-2003 dataset Ikuya Yamada Evaluate BigBird-Pegasus on PubMed dataset How to evaluate BigBirdPegasusForConditionalGeneration on PubMed dataset Vasudev Gupta Speech Emotion Classification with Wav2Vec2 How to leverage a pretrained Wav2Vec2 model for Emotion Classification on the MEGA dataset Mehrdad Farahani Detect objects in an image with DETR How to use a trained DetrForObjectDetection model to detect objects in an image and visualize attention Niels Rogge Fine-tune DETR on a custom object detection dataset How to fine-tune DetrForObjectDetection on a custom object detection dataset Niels Rogge Finetune T5 for Named Entity Recognition How to fine-tune T5 on a Named Entity Recognition Task Ogundepo Odunayo |
https://huggingface.co/docs/transformers/main_classes/processors | Processors
Processors can mean two different things in the Transformers library:
the objects that pre-process inputs for multi-modal models such as Wav2Vec2 (speech and text) or CLIP (text and vision)
deprecated objects that were used in older versions of the library to preprocess data for GLUE or SQUAD.
Multi-modal processors
Any multi-modal model will require an object to encode or decode the data that groups several modalities (among text, vision and audio). This is handled by objects called processors, which group together two or more processing objects such as tokenizers (for the text modality), image processors (for vision) and feature extractors (for audio).
Those processors inherit from the following base class that implements the saving and loading functionality:
class transformers.ProcessorMixin
< source >
( *args **kwargs )
This is a mixin used to provide saving/loading functionality for all processor classes.
from_pretrained
< source >
( pretrained_model_name_or_path: typing.Union[str, os.PathLike] cache_dir: typing.Union[str, os.PathLike, NoneType] = None force_download: bool = False local_files_only: bool = False token: typing.Union[bool, str, NoneType] = None revision: str = 'main' **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — This can be either:
a string, the model id of a pretrained feature_extractor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
a path to a directory containing a feature extractor file saved using the save_pretrained() method, e.g., ./my_model_directory/.
a path or url to a saved feature extractor JSON file, e.g., ./my_model_directory/preprocessor_config.json. **kwargs — Additional keyword arguments passed along to both from_pretrained() and ~tokenization_utils_base.PreTrainedTokenizer.from_pretrained.
Instantiate a processor associated with a pretrained model.
This class method is simply calling the feature extractor from_pretrained(), image processor ImageProcessingMixin and the tokenizer ~tokenization_utils_base.PreTrainedTokenizer.from_pretrained methods. Please refer to the docstrings of the methods above for more information.
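For example (the checkpoint name is only an illustration):
from transformers import Wav2Vec2Processor
# Instantiates both the feature extractor and the tokenizer stored with the checkpoint
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")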
push_to_hub
< source >
( repo_id: str use_temp_dir: typing.Optional[bool] = None commit_message: typing.Optional[str] = None private: typing.Optional[bool] = None token: typing.Union[bool, str, NoneType] = None max_shard_size: typing.Union[int, str, NoneType] = '10GB' create_pr: bool = False safe_serialization: bool = False revision: str = None **deprecated_kwargs )
Parameters
repo_id (str) — The name of the repository you want to push your processor to. It should contain your organization name when pushing to a given organization.
use_temp_dir (bool, optional) — Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to True if there is no directory named like repo_id, False otherwise.
commit_message (str, optional) — Message to commit while pushing. Will default to "Upload processor".
private (bool, optional) — Whether or not the repository created should be private.
token (bool or str, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.
max_shard_size (int or str, optional, defaults to "10GB") — Only applicable for models. The maximum size for a checkpoint before being sharded. Checkpoints shard will then be each of size lower than this size. If expressed as a string, needs to be digits followed by a unit (like "5MB").
create_pr (bool, optional, defaults to False) — Whether or not to create a PR with the uploaded files or directly commit.
safe_serialization (bool, optional, defaults to False) — Whether or not to convert the model weights in safetensors format for safer serialization.
revision (str, optional) — Branch to push the uploaded files to.
Upload the processor files to the 🤗 Model Hub.
Examples:
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained("bert-base-cased")
processor.push_to_hub("my-finetuned-bert")
processor.push_to_hub("huggingface/my-finetuned-bert")
register_for_auto_class
< source >
( auto_class = 'AutoProcessor' )
Parameters
auto_class (str or type, optional, defaults to "AutoProcessor") — The auto class to register this new feature extractor with.
Register this class with a given auto class. This should only be used for custom feature extractors as the ones in the library are already mapped with AutoProcessor.
This API is experimental and may have some slight breaking changes in the next releases.
save_pretrained
< source >
( save_directory push_to_hub: bool = False **kwargs )
Parameters
save_directory (str or os.PathLike) — Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will be created if it does not exist).
push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).
kwargs (Dict[str, Any], optional) — Additional key word arguments passed along to the push_to_hub() method.
Saves the attributes of this processor (feature extractor, tokenizer…) in the specified directory so that it can be reloaded using the from_pretrained() method.
This class method is simply calling save_pretrained() and save_pretrained(). Please refer to the docstrings of the methods above for more information.
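Roughly, a save/reload round-trip looks like this (the directory name is arbitrary):
from transformers import Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
# Writes the feature extractor and tokenizer files to the directory
processor.save_pretrained("./my_wav2vec2_processor")
# The directory can then be reloaded with from_pretrained()
reloaded_processor = Wav2Vec2Processor.from_pretrained("./my_wav2vec2_processor")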
Deprecated processors
All processors follow the same architecture which is that of the DataProcessor. The processor returns a list of InputExample. These InputExample objects can be converted to InputFeatures in order to be fed to the model.
class transformers.DataProcessor
< source >
( )
Base class for data converters for sequence classification data sets.
get_example_from_tensor_dict
< source >
( tensor_dict )
Gets an example from a dict with tensorflow tensors.
get_labels
< source >
( )
Gets the list of labels for this data set.
tfds_map
< source >
( example )
Some tensorflow_datasets datasets are not formatted the same way the GLUE datasets are. This method converts examples to the correct format.
class transformers.InputExample
< source >
( guid: str text_a: str text_b: typing.Optional[str] = None label: typing.Optional[str] = None )
A single training/test example for simple sequence classification.
to_json_string
Serializes this instance to a JSON string.
class transformers.InputFeatures
< source >
( input_ids: typing.List[int] attention_mask: typing.Optional[typing.List[int]] = None token_type_ids: typing.Optional[typing.List[int]] = None label: typing.Union[int, float, NoneType] = None )
A single set of features of data. Property names are the same names as the corresponding inputs to a model.
to_json_string
Serializes this instance to a JSON string.
GLUE
General Language Understanding Evaluation (GLUE) is a benchmark that evaluates the performance of models across a diverse set of existing NLU tasks. It was released together with the paper GLUE: A multi-task benchmark and analysis platform for natural language understanding
This library hosts a total of 10 processors for the following tasks: MRPC, MNLI, MNLI (mismatched), CoLA, SST2, STSB, QQP, QNLI, RTE and WNLI.
Those processors are:
~data.processors.utils.MrpcProcessor
~data.processors.utils.MnliProcessor
~data.processors.utils.MnliMismatchedProcessor
~data.processors.utils.Sst2Processor
~data.processors.utils.StsbProcessor
~data.processors.utils.QqpProcessor
~data.processors.utils.QnliProcessor
~data.processors.utils.RteProcessor
~data.processors.utils.WnliProcessor
Additionally, the following method can be used to load values from a data file and convert them to a list of InputExample.
transformers.glue_convert_examples_to_features
< source >
( examples: typing.Union[typing.List[transformers.data.processors.utils.InputExample], ForwardRef('tf.data.Dataset')] tokenizer: PreTrainedTokenizer max_length: typing.Optional[int] = None task = None label_list = None output_mode = None )
Loads a data file into a list of InputFeatures
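A rough sketch of combining this conversion method with tensorflow_datasets (these utilities are deprecated, so exact imports and arguments may vary between versions; the max_length value is arbitrary):
import tensorflow_datasets as tfds
from transformers import BertTokenizer, glue_convert_examples_to_features
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
# MRPC examples as a tf.data.Dataset
data = tfds.load("glue/mrpc")
# Tokenizes and converts the examples for the MRPC task, truncated/padded to 128 tokens
train_dataset = glue_convert_examples_to_features(data["train"], tokenizer, max_length=128, task="mrpc")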
XNLI
The Cross-Lingual NLI Corpus (XNLI) is a benchmark that evaluates the quality of cross-lingual text representations. XNLI is a crowd-sourced dataset based on MultiNLI: pairs of text are labeled with textual entailment annotations for 15 different languages (including both high-resource languages such as English and low-resource languages such as Swahili).
It was released together with the paper XNLI: Evaluating Cross-lingual Sentence Representations
This library hosts the processor to load the XNLI data:
~data.processors.utils.XnliProcessor
Please note that since the gold labels are available on the test set, evaluation is performed on the test set.
An example using these processors is given in the run_xnli.py script.
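As a small, hedged sketch (assuming the XNLI data has already been downloaded into a local directory named xnli_data_dir, laid out the way the processor expects):
from transformers.data.processors.xnli import XnliProcessor

processor = XnliProcessor(language="de")
examples = processor.get_test_examples("xnli_data_dir")  # list of InputExample with entailment labels
print(examples[0].text_a, examples[0].label)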
SQuAD
The Stanford Question Answering Dataset (SQuAD) is a benchmark that evaluates the performance of models on question answering. Two versions are available, v1.1 and v2.0. The first version (v1.1) was released together with the paper SQuAD: 100,000+ Questions for Machine Comprehension of Text. The second version (v2.0) was released alongside the paper Know What You Don’t Know: Unanswerable Questions for SQuAD.
This library hosts a processor for each of the two versions:
Processors
Those processors are:
~data.processors.utils.SquadV1Processor
~data.processors.utils.SquadV2Processor
They both inherit from the abstract class ~data.processors.utils.SquadProcessor
class transformers.data.processors.squad.SquadProcessor
< source >
( )
Processor for the SQuAD data set. Overridden by SquadV1Processor and SquadV2Processor, which are used for version 1.1 and version 2.0 of SQuAD, respectively.
get_dev_examples
< source >
( data_dir filename = None )
Returns the evaluation examples from the data directory.
get_examples_from_dataset
< source >
( dataset evaluate = False )
Creates a list of SquadExample using a TFDS dataset.
Examples:
>>> import tensorflow_datasets as tfds
>>> dataset = tfds.load("squad")
>>> processor = SquadV1Processor()
>>> training_examples = processor.get_examples_from_dataset(dataset, evaluate=False)
>>> evaluation_examples = processor.get_examples_from_dataset(dataset, evaluate=True)
get_train_examples
< source >
( data_dir filename = None )
Returns the training examples from the data directory.
Additionally, the following method can be used to convert SQuAD examples into ~data.processors.utils.SquadFeatures that can be used as model inputs.
transformers.squad_convert_examples_to_features
< source >
( examples tokenizer max_seq_length doc_stride max_query_length is_training padding_strategy = 'max_length' return_dataset = False threads = 1 tqdm_enabled = True )
Converts a list of examples into a list of features that can be directly given as input to a model. It is model-dependent and takes advantage of many of the tokenizer's features to create the model's inputs.
Example:
processor = SquadV2Processor()
examples = processor.get_dev_examples(data_dir)
features = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=args.max_seq_length,
doc_stride=args.doc_stride,
max_query_length=args.max_query_length,
is_training=not evaluate,
)
These processors, as well as the aforementioned method, can be used with files containing the data or with the tensorflow_datasets package. Examples are given below.
Example usage
Here is an example using the processors as well as the conversion method using data files:
processor = SquadV2Processor()
examples = processor.get_dev_examples(squad_v2_data_dir)
processor = SquadV1Processor()
examples = processor.get_dev_examples(squad_v1_data_dir)
features = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=max_seq_length,
doc_stride=doc_stride,
max_query_length=max_query_length,
is_training=not evaluate,
)
Using tensorflow_datasets is as easy as using a data file:
tfds_examples = tfds.load("squad")
examples = SquadV1Processor().get_examples_from_dataset(tfds_examples, evaluate=evaluate)
features = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=max_seq_length,
doc_stride=doc_stride,
max_query_length=max_query_length,
is_training=not evaluate,
)
Another example using these processors is given in the run_squad.py script. |
https://huggingface.co/docs/transformers/main_classes/quantization | Quantize 🤗 Transformers models
AutoGPTQ Integration
🤗 Transformers has integrated the optimum API to perform GPTQ quantization on language models. You can load and quantize your model in 8, 4, 3 or even 2 bits without a big drop in performance, and with faster inference speed! This is supported by most GPU hardware.
To learn more about the quantization method, check out:
the GPTQ paper
the optimum guide on GPTQ quantization
the AutoGPTQ library used as the backend
Requirements
You need to have the following requirements installed to run the code below:
Install latest AutoGPTQ library pip install auto-gptq
Install latest optimum from source pip install git+https://github.com/huggingface/optimum.git
Install latest transformers from source pip install git+https://github.com/huggingface/transformers.git
Install latest accelerate library pip install --upgrade accelerate
Note that the GPTQ integration currently supports only text models; you may encounter unexpected behaviour for vision, speech or multi-modal models.
Load and quantize a model
GPTQ is a quantization method that requires weight calibration before using the quantized model. If you want to quantize a transformers model from scratch, it might take some time to produce the quantized model (~5 min on a Google Colab for the facebook/opt-350m model).
Hence, there are two different scenarios where you would use GPTQ-quantized models. The first use case is to load a model that has already been quantized by other users and is available on the Hub; the second is to quantize your own model from scratch and save it or push it to the Hub so that other users can also use it.
GPTQ Configuration
In order to load and quantize a model, you need to create a GPTQConfig. You need to pass the number of bits, a dataset to calibrate the quantization, and the tokenizer of the model in order to prepare the dataset.
from transformers import AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
Note that you can pass your own dataset as a list of strings. However, it is highly recommended to use one of the datasets from the GPTQ paper.
dataset = ["auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm."]
gptq_config = GPTQConfig(bits=4, dataset=dataset, tokenizer=tokenizer)
Quantization
You can quantize a model by using from_pretrained and setting the quantization_config.
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=gptq_config)
Note that you will need a GPU to quantize a model. We will put the model on the CPU and move the modules back and forth to the GPU in order to quantize them.
If you want to maximize your GPU usage while using CPU offload, you can set device_map = "auto".
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=gptq_config)
Note that disk offload is not supported. Furthermore, if you run out of memory because of the dataset, you may have to pass max_memory in from_pretrained. Check out this guide to learn more about device_map and max_memory.
GPTQ quantization only works for text models for now. Furthermore, the quantization process can take a lot of time depending on your hardware (quantizing a 175B model takes roughly 4 GPU-hours on an NVIDIA A100). Please check on the Hub whether a GPTQ-quantized version of the model already exists. If not, you can submit a request on GitHub.
Push quantized model to 🤗 Hub
You can push a quantized model to the Hub like any 🤗 model with push_to_hub. The quantization config will be saved and pushed along with the model.
quantized_model.push_to_hub("opt-125m-gptq")
tokenizer.push_to_hub("opt-125m-gptq")
If you want to save your quantized model on your local machine, you can also do it with save_pretrained:
quantized_model.save_pretrained("opt-125m-gptq")
tokenizer.save_pretrained("opt-125m-gptq")
Note that if you have quantized your model with a device_map, make sure to move the entire model to one of your GPUs or to the CPU before saving it.
quantized_model.to("cpu")
quantized_model.save_pretrained("opt-125m-gptq")
Load a quantized model from the 🤗 Hub
You can load a quantized model from the Hub by using from_pretrained. Make sure that the pushed weights are quantized, by checking that the attribute quantization_config is present in the model configuration object.
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq")
If you want to load a model faster and without allocating more memory than needed, the device_map argument also works with quantized models. Make sure that you have the accelerate library installed.
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto")
Exllama kernels for faster inference
For 4-bit models, you can use the exllama kernels for faster inference. They are activated by default. You can change this behavior by passing disable_exllama in GPTQConfig. This will overwrite the quantization config stored in the config. Note that you will only be able to overwrite the attributes related to the kernels. Furthermore, you need to have the entire model on GPUs if you want to use the exllama kernels.
from transformers import AutoModelForCausalLM, GPTQConfig

gptq_config = GPTQConfig(bits=4, disable_exllama=False)
model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto", quantization_config=gptq_config)
Note that only 4-bit models are supported for now. Furthermore, it is recommended to deactivate the exllama kernels if you are fine-tuning a quantized model with peft.
Fine-tune a quantized model
With the official support of adapters in the Hugging Face ecosystem, you can fine-tune models that have been quantized with GPTQ. Please have a look at peft library for more details.
Example demo
Check out the Google Colab notebook to learn how to quantize your model with GPTQ and how to fine-tune the quantized model with peft.
GPTQConfig
class transformers.GPTQConfig
< source >
( bits: int tokenizer: typing.Any = None dataset: typing.Union[str, typing.List[str], NoneType] = None group_size: int = 128 damp_percent: float = 0.1 desc_act: bool = False sym: bool = True true_sequential: bool = True use_cuda_fp16: bool = False model_seqlen: typing.Optional[int] = None block_name_to_quantize: typing.Optional[str] = None module_name_preceding_first_block: typing.Optional[typing.List[str]] = None batch_size: int = 1 pad_token_id: typing.Optional[int] = None disable_exllama: bool = False max_input_length: typing.Optional[int] = None **kwargs )
Parameters
bits (int) — The number of bits to quantize to, supported numbers are (2, 3, 4, 8).
tokenizer (str or PreTrainedTokenizerBase, optional) — The tokenizer used to process the dataset. You can pass either:
A custom tokenizer object.
A string, the model id of a predefined tokenizer hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing vocabulary files required by the tokenizer, for instance saved using the save_pretrained() method, e.g., ./my_model_directory/.
dataset (Union[List[str]], optional) — The dataset used for quantization. You can provide your own dataset as a list of strings, or just use one of the original datasets used in the GPTQ paper: ['wikitext2', 'c4', 'c4-new', 'ptb', 'ptb-new'].
group_size (int, optional, defaults to 128) — The group size to use for quantization. Recommended value is 128 and -1 uses per-column quantization.
damp_percent (float, optional, defaults to 0.1) — The percent of the average Hessian diagonal to use for dampening. Recommended value is 0.1.
desc_act (bool, optional, defaults to False) — Whether to quantize columns in order of decreasing activation size. Setting it to False can significantly speed up inference but the perplexity may become slightly worse. Also known as act-order.
sym (bool, optional, defaults to True) — Whether to use symmetric quantization.
true_sequential (bool, optional, defaults to True) — Whether to perform sequential quantization even within a single Transformer block. Instead of quantizing the entire block at once, we perform layer-wise quantization. As a result, each layer undergoes quantization using inputs that have passed through the previously quantized layers.
use_cuda_fp16 (bool, optional, defaults to False) — Whether or not to use optimized cuda kernel for fp16 model. Need to have model in fp16.
model_seqlen (int, optional) — The maximum sequence length that the model can take.
block_name_to_quantize (str, optional) — The transformers block name to quantize.
module_name_preceding_first_block (List[str], optional) — The layers that are preceding the first Transformer block.
batch_size (int, optional, defaults to 1) — The batch size used when processing the dataset.
pad_token_id (int, optional) — The pad token id. Needed to prepare the dataset when batch_size > 1.
disable_exllama (bool, optional, defaults to False) — Whether to use exllama backend. Only works with bits = 4.
max_input_length (int, optional) — The maximum input length. This is needed to initialize a buffer that depends on the maximum expected input length. It is specific to the exllama backend with act-order.
This is a wrapper class for all the attributes and features that you can play with for a model that has been loaded using the optimum API for GPTQ quantization, relying on the auto_gptq backend.
Safety checker that arguments are correct
bitsandbytes Integration
🤗 Transformers is closely integrated with the most commonly used modules of bitsandbytes. You can load your model in 8-bit precision with a few lines of code. This has been supported by most GPU hardware since the 0.37.0 release of bitsandbytes.
Learn more about the quantization method in the LLM.int8() paper, or the blogpost about the collaboration.
Since its 0.39.0 release, you can load any model that supports device_map using 4-bit quantization, leveraging the FP4 data type.
If you want to quantize your own pytorch model, check out this documentation from 🤗 Accelerate library.
Here are the things you can do using the bitsandbytes integration.
General usage
You can quantize a model by using the load_in_8bit or load_in_4bit argument when calling the from_pretrained() method as long as your model supports loading with 🤗 Accelerate and contains torch.nn.Linear layers. This should work for any modality as well.
from transformers import AutoModelForCausalLM
model_8bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True)
model_4bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_4bit=True)
By default all other modules (e.g. torch.nn.LayerNorm) will be converted to torch.float16, but if you want to change their dtype you can overwrite the torch_dtype argument:
>>> import torch
>>> from transformers import AutoModelForCausalLM
>>> model_8bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True, torch_dtype=torch.float32)
>>> model_8bit.model.decoder.layers[-1].final_layer_norm.weight.dtype
torch.float32
FP4 quantization
Requirements
Make sure that you have installed the requirements below before running any of the code snippets.
Latest bitsandbytes library pip install bitsandbytes>=0.39.0
Install latest accelerate pip install --upgrade accelerate
Install latest transformers pip install --upgrade transformers
Tips and best practices
Advanced usage: Refer to this Google Colab notebook for advanced usage of 4-bit quantization with all the possible options.
Faster inference with batch_size=1: Since the 0.40.0 release of bitsandbytes, for batch_size=1 you can benefit from fast inference. Check out these release notes and make sure to have a version greater than 0.40.0 to benefit from this feature out of the box.
Training: According to QLoRA paper, for training 4-bit base models (e.g. using LoRA adapters) one should use bnb_4bit_quant_type='nf4'.
Inference: For inference, bnb_4bit_quant_type does not have a huge impact on performance. However, for consistency with the model’s weights, make sure you use the same bnb_4bit_compute_dtype and torch_dtype arguments; a combined sketch follows below.
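Putting the training and inference tips above together, here is a hedged sketch of a consistent 4-bit configuration (the checkpoint name is only an example, and the chosen dtypes are illustrative):
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # recommended for training 4-bit base models
    bnb_4bit_compute_dtype=torch.bfloat16,  # kept in sync with torch_dtype below
)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)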
Load a large model in 4bit
By using load_in_4bit=True when calling the .from_pretrained method, you can divide your memory use by 4 (roughly).
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "bigscience/bloom-1b7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_4bit=True)
Note that once a model has been loaded in 4-bit it is currently not possible to push the quantized weights to the Hub. Note also that you cannot train 4-bit weights, as this is not supported yet. However, you can use 4-bit models to train extra parameters; this will be covered in the next section.
Load a large model in 8bit
You can load a model with roughly half the memory requirements by using the load_in_8bit=True argument when calling the .from_pretrained method.
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "bigscience/bloom-1b7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)
Then, use your model as you would usually use a PreTrainedModel.
You can check the memory footprint of your model with the get_memory_footprint method.
print(model.get_memory_footprint())
With this integration we were able to load large models on smaller devices and run them without any issue.
Note that once a model has been loaded in 8-bit it is currently not possible to push the quantized weights to the Hub, unless you use the latest transformers and bitsandbytes. Note also that you cannot train 8-bit weights, as this is not supported yet. However, you can use 8-bit models to train extra parameters; this will be covered in the next section. Note also that device_map is optional, but setting device_map = 'auto' is preferred for inference as it will efficiently dispatch the model on the available resources.
Advanced use cases
Here we will cover some advanced use cases you can perform with FP4 quantization
Change the compute dtype
The compute dtype is used to change the dtype that will be used during computation. For example, hidden states could be in float32 but computation can be set to bf16 for speedups. By default, the compute dtype is set to float32.
import torch
from transformers import BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
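To apply it, pass the config at load time; a minimal sketch, reusing the quantization_config defined above (the checkpoint name is illustrative):
from transformers import AutoModelForCausalLM

model_bf16_compute = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=quantization_config,  # the BitsAndBytesConfig defined above
)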
Using NF4 (Normal Float 4) data type
You can also use the NF4 data type, which is a new 4-bit data type adapted for weights that have been initialized using a normal distribution. For that, run:
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
nf4_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
)
model_nf4 = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=nf4_config)
Use nested quantization for more memory efficient inference
We also advise users to use the nested quantization technique. This saves more memory at no additional performance cost: from our empirical observations, it enables fine-tuning a llama-13b model on an NVIDIA T4 16GB with a sequence length of 1024, a batch size of 1 and gradient accumulation steps of 4.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
double_quant_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
)
model_double_quant = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=double_quant_config)
Push quantized models on the 🤗 Hub
You can push a quantized model to the Hub by naively using the push_to_hub method. This will first push the quantization configuration file, then push the quantized model weights. Make sure to use bitsandbytes>0.37.2 (at the time of writing, we tested it on bitsandbytes==0.38.0.post1) to be able to use this feature.
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model.push_to_hub("bloom-560m-8bit")
Pushing 8-bit models to the Hub is strongly encouraged for large models. This will allow the community to benefit from the memory footprint reduction and, for example, from loading large models on a Google Colab.
Load a quantized model from the 🤗 Hub
You can load a quantized model from the Hub by using the from_pretrained method. Make sure that the pushed weights are quantized by checking that the attribute quantization_config is present in the model configuration object.
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("{your_username}/bloom-560m-8bit", device_map="auto")
Note that in this case, you don't need to specify the argument load_in_8bit=True, but you need to make sure that bitsandbytes and accelerate are installed. Note also that device_map is optional, but setting device_map = 'auto' is preferred for inference as it will efficiently dispatch the model on the available resources.
Advanced use cases
This section is intended for advanced users who want to explore what is possible beyond loading and running 8-bit models.
Offload between cpu and gpu
One of the advanced use cases is being able to load a model and dispatch the weights between the CPU and the GPU. Note that the weights dispatched to the CPU will not be converted to 8-bit and are thus kept in float32. This feature is intended for users that want to fit a very large model and dispatch it between the GPU and the CPU.
First, load a BitsAndBytesConfig from transformers and set the attribute llm_int8_enable_fp32_cpu_offload to True:
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)
Let’s say you want to load the bigscience/bloom-1b7 model, and you have just enough GPU RAM to fit the entire model except the lm_head. Therefore, write a custom device_map as follows:
device_map = {
"transformer.word_embeddings": 0,
"transformer.word_embeddings_layernorm": 0,
"lm_head": "cpu",
"transformer.h": 0,
"transformer.ln_f": 0,
}
And load your model as follows:
model_8bit = AutoModelForCausalLM.from_pretrained(
"bigscience/bloom-1b7",
device_map=device_map,
quantization_config=quantization_config,
)
And that’s it! Enjoy your model!
Play with llm_int8_threshold
You can play with the llm_int8_threshold argument to change the threshold of the outliers. An “outlier” is a hidden state value that is greater than a certain threshold. This corresponds to the outlier threshold for outlier detection as described in the LLM.int8() paper. Any hidden state value above this threshold will be considered an outlier and the operation on those values will be done in fp16. Values are usually normally distributed, that is, most values are in the range [-3.5, 3.5], but there are some exceptional systematic outliers that are very differently distributed for large models. These outliers are often in the interval [-60, -6] or [6, 60]. Int8 quantization works well for values of magnitude ~5, but beyond that, there is a significant performance penalty. A good default threshold is 6, but a lower threshold might be needed for more unstable models (small models, fine-tuning). This argument can impact the inference speed of the model. We suggest playing with this parameter to find the best value for your use case.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "bigscience/bloom-1b7"
quantization_config = BitsAndBytesConfig(
llm_int8_threshold=10,
)
model_8bit = AutoModelForCausalLM.from_pretrained(
model_id,
device_map=device_map,
quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
Skip the conversion of some modules
Some models have several modules that need to be kept out of 8-bit conversion to ensure stability. For example, the Jukebox model has several lm_head modules that should be skipped. Play with llm_int8_skip_modules.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "bigscience/bloom-1b7"
quantization_config = BitsAndBytesConfig(
llm_int8_skip_modules=["lm_head"],
)
model_8bit = AutoModelForCausalLM.from_pretrained(
model_id,
device_map=device_map,
quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
Fine-tune a model that has been loaded in 8-bit
With the official support of adapters in the Hugging Face ecosystem, you can fine-tune models that have been loaded in 8-bit. This enables fine-tuning large models such as flan-t5-large or facebook/opt-6.7b in a single Google Colab. Please have a look at the peft library for more details.
Note that you don’t need to pass device_map when loading the model for training. It will automatically load your model on your GPU. You can also set the device map to a specific device if needed (e.g. cuda:0, 0, torch.device('cuda:0')). Please note that device_map='auto' should be used for inference only.
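As a hedged sketch of what this looks like with peft (the checkpoint, LoRA hyperparameters and target modules are illustrative and depend on the architecture; prepare_model_for_kbit_training assumes a recent peft release, and loading in 8-bit requires a GPU with bitsandbytes installed):
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True)
model = prepare_model_for_kbit_training(model)  # prepares the 8-bit model for training (e.g. enables gradient checkpointing)
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections in OPT; adjust for other architectures
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()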
BitsAndBytesConfig
class transformers.BitsAndBytesConfig
< source >
( load_in_8bit = False load_in_4bit = False llm_int8_threshold = 6.0 llm_int8_skip_modules = None llm_int8_enable_fp32_cpu_offload = False llm_int8_has_fp16_weight = False bnb_4bit_compute_dtype = None bnb_4bit_quant_type = 'fp4' bnb_4bit_use_double_quant = False **kwargs )
Parameters
load_in_8bit (bool, optional, defaults to False) — This flag is used to enable 8-bit quantization with LLM.int8().
load_in_4bit (bool, optional, defaults to False) — This flag is used to enable 4-bit quantization by replacing the Linear layers with FP4/NF4 layers from bitsandbytes.
llm_int8_threshold (float, optional, defaults to 6) — This corresponds to the outlier threshold for outlier detection as described in LLM.int8() : 8-bit Matrix Multiplication for Transformers at Scale paper: https://arxiv.org/abs/2208.07339 Any hidden states value that is above this threshold will be considered an outlier and the operation on those values will be done in fp16. Values are usually normally distributed, that is, most values are in the range [-3.5, 3.5], but there are some exceptional systematic outliers that are very differently distributed for large models. These outliers are often in the interval [-60, -6] or [6, 60]. Int8 quantization works well for values of magnitude ~5, but beyond that, there is a significant performance penalty. A good default threshold is 6, but a lower threshold might be needed for more unstable models (small models, fine-tuning).
llm_int8_skip_modules (List[str], optional) — An explicit list of the modules that we do not want to convert in 8-bit. This is useful for models such as Jukebox that have several heads in different places and not necessarily at the last position. For example, for CausalLM models, the last lm_head is kept in its original dtype.
llm_int8_enable_fp32_cpu_offload (bool, optional, defaults to False) — This flag is used for advanced use cases and users that are aware of this feature. If you want to split your model in different parts and run some parts in int8 on GPU and some parts in fp32 on CPU, you can use this flag. This is useful for offloading large models such as google/flan-t5-xxl. Note that the int8 operations will not be run on CPU.
llm_int8_has_fp16_weight (bool, optional, defaults to False) — This flag runs LLM.int8() with 16-bit main weights. This is useful for fine-tuning as the weights do not have to be converted back and forth for the backward pass.
bnb_4bit_compute_dtype (torch.dtype or str, optional, defaults to torch.float32) — This sets the computational type, which might be different from the input type. For example, inputs might be fp32, but computation can be set to bf16 for speedups.
bnb_4bit_quant_type (str, {fp4, nf4}, defaults to fp4) — This sets the quantization data type in the bnb.nn.Linear4Bit layers. Options are FP4 and NF4 data types which are specified by fp4 or nf4.
bnb_4bit_use_double_quant (bool, optional, defaults to False) — This flag is used for nested quantization where the quantization constants from the first quantization are quantized again.
kwargs (Dict[str, Any], optional) — Additional parameters from which to initialize the configuration object.
This is a wrapper class about all possible attributes and features that you can play with a model that has been loaded using bitsandbytes.
This replaces load_in_8bit or load_in_4bit, therefore both options are mutually exclusive.
Currently only supports LLM.int8(), FP4, and NF4 quantization. If more methods are added to bitsandbytes, then more arguments will be added to this class.
Returns True if the model is quantizable, False otherwise.
Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.
This method returns the quantization method used for the model. If the model is not quantizable, it returns None.
to_diff_dict
< source >
( ) → Dict[str, Any]
Dictionary of all the attributes that make up this configuration instance.
Removes all attributes from config which correspond to the default config attributes for better readability and serializes to a Python dictionary.
Quantization with 🤗 optimum
Please have a look at Optimum documentation to learn more about quantization methods that are supported by optimum and see if these are applicable for your use case. |
https://huggingface.co/docs/transformers/internal/pipelines_utils | Utilities for pipelines
This page lists all the utility functions the library provides for pipelines.
Most of those are only useful if you are studying the code of the models in the library.
Argument handling
class transformers.pipelines.ArgumentHandler
< source >
( )
Base interface for handling arguments for each Pipeline.
class transformers.pipelines.ZeroShotClassificationArgumentHandler
< source >
( )
Handles arguments for zero-shot for text classification by turning each possible label into an NLI premise/hypothesis pair.
class transformers.pipelines.QuestionAnsweringArgumentHandler
< source >
( )
QuestionAnsweringPipeline requires the user to provide multiple arguments (i.e. question & context) to be mapped to internal SquadExample.
QuestionAnsweringArgumentHandler manages all the possible ways to create a SquadExample from the command-line supplied arguments.
Data format
class transformers.PipelineDataFormat
< source >
( output_path: typing.Optional[str] input_path: typing.Optional[str] column: typing.Optional[str] overwrite: bool = False )
Parameters
output_path (str, optional) — Where to save the outgoing data.
input_path (str, optional) — Where to look for the input data.
column (str, optional) — The column to read.
overwrite (bool, optional, defaults to False) — Whether or not to overwrite the output_path.
Base class for all the pipeline-supported data formats, both for reading and writing. Supported data formats currently include:
JSON
CSV
stdin/stdout (pipe)
PipelineDataFormat also includes some utilities to work with multi-column data, like mapping from dataset columns to pipeline keyword arguments through the dataset_kwarg_1=dataset_column_1 format.
from_str
< source >
( format: str output_path: typing.Optional[str] input_path: typing.Optional[str] column: typing.Optional[str] overwrite = False ) → PipelineDataFormat
Parameters
format (str) — The format of the desired pipeline. Acceptable values are "json", "csv" or "pipe".
output_path (str, optional) — Where to save the outgoing data.
input_path (str, optional) — Where to look for the input data.
column (str, optional) — The column to read.
overwrite (bool, optional, defaults to False) — Whether or not to overwrite the output_path.
The proper data format.
Creates an instance of the right subclass of PipelineDataFormat depending on format.
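A minimal sketch of creating a CSV handler with from_str and saving pipeline outputs (the file names are illustrative, and a tiny input file is created just so the handler's existence check passes):
from pathlib import Path
from transformers import PipelineDataFormat

Path("inputs.csv").write_text("text\nI love this movie.\n")  # tiny input file for the demo
data_format = PipelineDataFormat.from_str(
    format="csv",
    output_path="predictions.csv",
    input_path="inputs.csv",
    column="text",
    overwrite=True,
)
data_format.save([{"label": "POSITIVE", "score": 0.99}])  # writes predictions.csv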
save
< source >
( data: typing.Union[dict, typing.List[dict]] )
Parameters
data (dict or list of dict) — The data to store.
Save the provided data object with the representation for the current PipelineDataFormat.
save_binary
< source >
( data: typing.Union[dict, typing.List[dict]] ) → str
Parameters
data (dict or list of dict) — The data to store.
Path where the data has been saved.
Save the provided data object as a pickle-formatted binary data on the disk.
class transformers.CsvPipelineDataFormat
< source >
( output_path: typing.Optional[str] input_path: typing.Optional[str] column: typing.Optional[str] overwrite = False )
Parameters
output_path (str, optional) — Where to save the outgoing data.
input_path (str, optional) — Where to look for the input data.
column (str, optional) — The column to read.
overwrite (bool, optional, defaults to False) — Whether or not to overwrite the output_path.
Support for pipelines using CSV data format.
save
< source >
( data: typing.List[dict] )
Parameters
data (List[dict]) — The data to store.
Save the provided data object with the representation for the current PipelineDataFormat.
class transformers.JsonPipelineDataFormat
< source >
( output_path: typing.Optional[str] input_path: typing.Optional[str] column: typing.Optional[str] overwrite = False )
Parameters
output_path (str, optional) — Where to save the outgoing data.
input_path (str, optional) — Where to look for the input data.
column (str, optional) — The column to read.
overwrite (bool, optional, defaults to False) — Whether or not to overwrite the output_path.
Support for pipelines using JSON file format.
save
< source >
( data: dict )
Parameters
data (dict) — The data to store.
Save the provided data object in a json file.
class transformers.PipedPipelineDataFormat
< source >
( output_path: typing.Optional[str] input_path: typing.Optional[str] column: typing.Optional[str] overwrite: bool = False )
Parameters
output_path (str, optional) — Where to save the outgoing data.
input_path (str, optional) — Where to look for the input data.
column (str, optional) — The column to read.
overwrite (bool, optional, defaults to False) — Whether or not to overwrite the output_path.
Read data from piped input to the python process. For multi-column data, columns should be separated by a tab character (\t).
If columns are provided, then the output will be a dictionary with {column_x: value_x}.
save
< source >
( data: dict )
Parameters
data (dict) — The data to store.
Print the data.
Utilities
class transformers.pipelines.PipelineException
< source >
( task: str model: str reason: str )
Parameters
task (str) — The task of the pipeline.
model (str) — The model used by the pipeline.
reason (str) — The error message to display.
Raised by a Pipeline when handling __call__.
https://huggingface.co/docs/transformers/internal/modeling_utils | Custom Layers and Utilities
This page lists all the custom layers used by the library, as well as the utility functions it provides for modeling.
Most of those are only useful if you are studying the code of the models in the library.
Pytorch custom modules
class transformers.Conv1D
< source >
( nf nx )
Parameters
nf (int) — The number of output features.
nx (int) — The number of input features.
1D-convolutional layer as defined by Radford et al. for OpenAI GPT (and also used in GPT-2).
Basically works like a linear layer but the weights are transposed.
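A minimal sketch showing the layer's shape behaviour (the sizes are arbitrary):
import torch
from transformers import Conv1D

layer = Conv1D(nf=6, nx=4)  # 4 input features -> 6 output features
x = torch.randn(2, 10, 4)   # (batch, seq_len, nx)
print(layer(x).shape)       # torch.Size([2, 10, 6])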
class transformers.modeling_utils.PoolerStartLogits
< source >
( config: PretrainedConfig )
Parameters
config (PretrainedConfig) — The config used by the model, will be used to grab the hidden_size of the model.
Compute SQuAD start logits from sequence hidden states.
forward
< source >
( hidden_states: FloatTensor p_mask: typing.Optional[torch.FloatTensor] = None ) → torch.FloatTensor
Parameters
hidden_states (torch.FloatTensor of shape (batch_size, seq_len, hidden_size)) — The final hidden states of the model.
p_mask (torch.FloatTensor of shape (batch_size, seq_len), optional) — Mask for tokens at invalid position, such as query and special symbols (PAD, SEP, CLS). 1.0 means token should be masked.
Returns
torch.FloatTensor
The start logits for SQuAD.
class transformers.modeling_utils.PoolerEndLogits
< source >
( config: PretrainedConfig )
Parameters
config (PretrainedConfig) — The config used by the model, will be used to grab the hidden_size of the model and the layer_norm_eps to use.
Compute SQuAD end logits from sequence hidden states.
forward
< source >
( hidden_states: FloatTensor start_states: typing.Optional[torch.FloatTensor] = None start_positions: typing.Optional[torch.LongTensor] = None p_mask: typing.Optional[torch.FloatTensor] = None ) → torch.FloatTensor
Parameters
hidden_states (torch.FloatTensor of shape (batch_size, seq_len, hidden_size)) — The final hidden states of the model.
start_states (torch.FloatTensor of shape (batch_size, seq_len, hidden_size), optional) — The hidden states of the first tokens for the labeled span.
start_positions (torch.LongTensor of shape (batch_size,), optional) — The position of the first token for the labeled span.
p_mask (torch.FloatTensor of shape (batch_size, seq_len), optional) — Mask for tokens at invalid position, such as query and special symbols (PAD, SEP, CLS). 1.0 means token should be masked.
Returns
torch.FloatTensor
The end logits for SQuAD.
One of start_states or start_positions should be not None. If both are set, start_positions overrides start_states.
class transformers.modeling_utils.PoolerAnswerClass
< source >
( config )
Parameters
config (PretrainedConfig) — The config used by the model, will be used to grab the hidden_size of the model.
Compute SQuAD 2.0 answer class from classification and start tokens hidden states.
forward
< source >
( hidden_states: FloatTensor start_states: typing.Optional[torch.FloatTensor] = None start_positions: typing.Optional[torch.LongTensor] = None cls_index: typing.Optional[torch.LongTensor] = None ) → torch.FloatTensor
Parameters
hidden_states (torch.FloatTensor of shape (batch_size, seq_len, hidden_size)) — The final hidden states of the model.
start_states (torch.FloatTensor of shape (batch_size, seq_len, hidden_size), optional) — The hidden states of the first tokens for the labeled span.
start_positions (torch.LongTensor of shape (batch_size,), optional) — The position of the first token for the labeled span.
cls_index (torch.LongTensor of shape (batch_size,), optional) — Position of the CLS token for each sentence in the batch. If None, takes the last token.
Returns
torch.FloatTensor
The SQuAD 2.0 answer class.
One of start_states or start_positions should be not None. If both are set, start_positions overrides start_states.
class transformers.modeling_utils.SquadHeadOutput
< source >
( loss: typing.Optional[torch.FloatTensor] = None start_top_log_probs: typing.Optional[torch.FloatTensor] = None start_top_index: typing.Optional[torch.LongTensor] = None end_top_log_probs: typing.Optional[torch.FloatTensor] = None end_top_index: typing.Optional[torch.LongTensor] = None cls_logits: typing.Optional[torch.FloatTensor] = None )
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned if both start_positions and end_positions are provided) — Classification loss as the sum of start token, end token (and is_impossible if provided) classification losses.
start_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the top config.start_n_top start token possibilities (beam-search).
start_top_index (torch.LongTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided) — Indices for the top config.start_n_top start token possibilities (beam-search).
end_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the top config.start_n_top * config.end_n_top end token possibilities (beam-search).
end_top_index (torch.LongTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided) — Indices for the top config.start_n_top * config.end_n_top end token possibilities (beam-search).
cls_logits (torch.FloatTensor of shape (batch_size,), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the is_impossible label of the answers.
Base class for outputs of question answering models using a SQuADHead.
class transformers.modeling_utils.SQuADHead
< source >
( config )
Parameters
config (PretrainedConfig) — The config used by the model, will be used to grab the hidden_size of the model and the layer_norm_eps to use.
A SQuAD head inspired by XLNet.
forward
< source >
( hidden_states: FloatTensor start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None cls_index: typing.Optional[torch.LongTensor] = None is_impossible: typing.Optional[torch.LongTensor] = None p_mask: typing.Optional[torch.FloatTensor] = None return_dict: bool = False ) → transformers.modeling_utils.SquadHeadOutput or tuple(torch.FloatTensor)
Parameters
hidden_states (torch.FloatTensor of shape (batch_size, seq_len, hidden_size)) — Final hidden states of the model on the sequence tokens.
start_positions (torch.LongTensor of shape (batch_size,), optional) — Positions of the first token for the labeled span.
end_positions (torch.LongTensor of shape (batch_size,), optional) — Positions of the last token for the labeled span.
cls_index (torch.LongTensor of shape (batch_size,), optional) — Position of the CLS token for each sentence in the batch. If None, takes the last token.
is_impossible (torch.LongTensor of shape (batch_size,), optional) — Whether the question has a possible answer in the paragraph or not.
p_mask (torch.FloatTensor of shape (batch_size, seq_len), optional) — Mask for tokens at invalid position, such as query and special symbols (PAD, SEP, CLS). 1.0 means token should be masked.
return_dict (bool, optional, defaults to False) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_utils.SquadHeadOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.configuration_utils.PretrainedConfig'>) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned if both start_positions and end_positions are provided) — Classification loss as the sum of start token, end token (and is_impossible if provided) classification losses.
start_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the top config.start_n_top start token possibilities (beam-search).
start_top_index (torch.LongTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided) — Indices for the top config.start_n_top start token possibilities (beam-search).
end_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the top config.start_n_top * config.end_n_top end token possibilities (beam-search).
end_top_index (torch.LongTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided) — Indices for the top config.start_n_top * config.end_n_top end token possibilities (beam-search).
cls_logits (torch.FloatTensor of shape (batch_size,), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the is_impossible label of the answers.
class transformers.modeling_utils.SequenceSummary
< source >
( config: PretrainedConfig )
Parameters
config (PretrainedConfig) — The config used by the model. Relevant arguments in the config class of the model are (refer to the actual config class of your model for the default values it uses):
summary_type (str) — The method to use to make this summary. Accepted values are:
"last" — Take the last token hidden state (like XLNet)
"first" — Take the first token hidden state (like Bert)
"mean" — Take the mean of all tokens hidden states
"cls_index" — Supply a Tensor of classification token position (GPT/GPT-2)
"attn" — Not implemented now, use multi-head attention
summary_use_proj (bool) — Add a projection after the vector extraction.
summary_proj_to_labels (bool) — If True, the projection outputs to config.num_labels classes (otherwise to config.hidden_size).
summary_activation (Optional[str]) — Set to "tanh" to add a tanh activation to the output, another string or None will add no activation.
summary_first_dropout (float) — Optional dropout probability before the projection and activation.
summary_last_dropout (float) — Optional dropout probability after the projection and activation.
Compute a single vector summary of a sequence hidden states.
forward
< source >
( hidden_states: FloatTensor cls_index: typing.Optional[torch.LongTensor] = None ) → torch.FloatTensor
Parameters
hidden_states (torch.FloatTensor of shape [batch_size, seq_len, hidden_size]) — The hidden states of the last layer.
cls_index (torch.LongTensor of shape [batch_size] or [batch_size, ...] where … are optional leading dimensions of hidden_states, optional) — Used if summary_type == "cls_index" and takes the last token of the sequence as classification token.
Returns
torch.FloatTensor
The summary of the sequence hidden states.
Compute a single vector summary of a sequence hidden states.
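A hedged sketch using a small GPT-2-style config (the hyperparameter values are illustrative; with summary_proj_to_labels=False the projection maps back to the hidden size):
import torch
from transformers import GPT2Config
from transformers.modeling_utils import SequenceSummary

config = GPT2Config(n_embd=32, summary_type="cls_index", summary_use_proj=True, summary_proj_to_labels=False)
summary = SequenceSummary(config)
hidden_states = torch.randn(2, 7, 32)            # (batch, seq_len, hidden_size)
cls_index = torch.tensor([6, 3])                 # classification-token position per example
print(summary(hidden_states, cls_index).shape)   # torch.Size([2, 32])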
PyTorch Helper Functions
transformers.apply_chunking_to_forward
< source >
( forward_fn: typing.Callable[..., torch.Tensor] chunk_size: int chunk_dim: int *input_tensors ) → torch.Tensor
Parameters
forward_fn (Callable[..., torch.Tensor]) — The forward function of the model.
chunk_size (int) — The chunk size of a chunked tensor: num_chunks = len(input_tensors[0]) / chunk_size.
chunk_dim (int) — The dimension over which the input_tensors should be chunked.
input_tensors (Tuple[torch.Tensor]) — The input tensors of forward_fn which will be chunked
A tensor with the same shape as the one forward_fn would have given if applied directly to input_tensors.
This function chunks the input_tensors into smaller input tensor parts of size chunk_size over the dimension chunk_dim. It then applies a layer forward_fn to each chunk independently to save memory.
If the forward_fn is independent across the chunk_dim this function will yield the same result as directly applying forward_fn to input_tensors.
Examples:
def forward_chunk(self, hidden_states):
hidden_states = self.decoder(hidden_states)
return hidden_states
def forward(self, hidden_states):
return apply_chunking_to_forward(self.forward_chunk, self.chunk_size_lm_head, self.seq_len_dim, hidden_states)
transformers.pytorch_utils.find_pruneable_heads_and_indices
< source >
( heads: typing.List[int] n_heads: int head_size: int already_pruned_heads: typing.Set[int] ) → Tuple[Set[int], torch.LongTensor]
Parameters
heads (List[int]) — List of the indices of heads to prune.
n_heads (int) — The number of heads in the model.
head_size (int) — The size of each head.
already_pruned_heads (Set[int]) — A set of already pruned heads.
Returns
Tuple[Set[int], torch.LongTensor]
A tuple with the indices of heads to prune taking already_pruned_heads into account and the indices of rows/columns to keep in the layer weight.
Finds the heads and their indices taking already_pruned_heads into account.
transformers.prune_layer
< source >
( layer: typing.Union[torch.nn.modules.linear.Linear, transformers.pytorch_utils.Conv1D] index: LongTensor dim: typing.Optional[int] = None ) → torch.nn.Linear or Conv1D
Parameters
layer (Union[torch.nn.Linear, Conv1D]) — The layer to prune.
index (torch.LongTensor) — The indices to keep in the layer.
dim (int, optional) — The dimension on which to keep the indices.
Returns
torch.nn.Linear or Conv1D
The pruned layer as a new layer with requires_grad=True.
Prune a Conv1D or linear layer to keep only entries in index.
Used to remove heads.
transformers.pytorch_utils.prune_conv1d_layer
< source >
( layer: Conv1D index: LongTensor dim: int = 1 ) → Conv1D
Parameters
layer (Conv1D) — The layer to prune.
index (torch.LongTensor) — The indices to keep in the layer.
dim (int, optional, defaults to 1) — The dimension on which to keep the indices.
The pruned layer as a new layer with requires_grad=True.
Prune a Conv1D layer to keep only entries in index. A Conv1D works as a Linear layer (see e.g. BERT) but the weights are transposed.
Used to remove heads.
transformers.pytorch_utils.prune_linear_layer
< source >
( layer: Linear index: LongTensor dim: int = 0 ) → torch.nn.Linear
Parameters
layer (torch.nn.Linear) — The layer to prune.
index (torch.LongTensor) — The indices to keep in the layer.
dim (int, optional, defaults to 0) — The dimension on which to keep the indices.
The pruned layer as a new layer with requires_grad=True.
Prune a linear layer to keep only entries in index.
Used to remove heads.
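A minimal sketch of pruning a linear layer down to a subset of its output units (the sizes and indices are arbitrary):
import torch
from transformers.pytorch_utils import prune_linear_layer

layer = torch.nn.Linear(8, 4)
index = torch.tensor([0, 2])                  # keep output units 0 and 2
pruned = prune_linear_layer(layer, index, dim=0)
print(pruned.weight.shape)                    # torch.Size([2, 8])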
TensorFlow custom layers
class transformers.modeling_tf_utils.TFConv1D
< source >
( *args **kwargs )
Parameters
nf (int) — The number of output features.
nx (int) — The number of input features.
initializer_range (float, optional, defaults to 0.02) — The standard deviation to use to initialize the weights.
kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to the __init__ of tf.keras.layers.Layer.
1D-convolutional layer as defined by Radford et al. for OpenAI GPT (and also used in GPT-2).
Basically works like a linear layer but the weights are transposed.
class transformers.TFSequenceSummary
< source >
( *args **kwargs )
Parameters
config (PretrainedConfig) — The config used by the model. Relevant arguments in the config class of the model are (refer to the actual config class of your model for the default values it uses):
summary_type (str) — The method to use to make this summary. Accepted values are:
"last" — Take the last token hidden state (like XLNet)
"first" — Take the first token hidden state (like Bert)
"mean" — Take the mean of all tokens hidden states
"cls_index" — Supply a Tensor of classification token position (GPT/GPT-2)
"attn" — Not implemented now, use multi-head attention
summary_use_proj (bool) — Add a projection after the vector extraction.
summary_proj_to_labels (bool) — If True, the projection outputs to config.num_labels classes (otherwise to config.hidden_size).
summary_activation (Optional[str]) — Set to "tanh" to add a tanh activation to the output, another string or None will add no activation.
summary_first_dropout (float) — Optional dropout probability before the projection and activation.
summary_last_dropout (float) — Optional dropout probability after the projection and activation.
initializer_range (float, defaults to 0.02) — The standard deviation to use to initialize the weights.
kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to the __init__ of tf.keras.layers.Layer.
Compute a single vector summary of a sequence hidden states.
TensorFlow loss functions
class transformers.modeling_tf_utils.TFCausalLanguageModelingLoss
< source >
( )
Loss function suitable for causal language modeling (CLM), that is, the task of guessing the next token.
Any label of -100 will be ignored (along with the corresponding logits) in the loss computation.
class transformers.modeling_tf_utils.TFMaskedLanguageModelingLoss
< source >
( )
Loss function suitable for masked language modeling (MLM), that is, the task of guessing the masked tokens.
Any label of -100 will be ignored (along with the corresponding logits) in the loss computation.
class transformers.modeling_tf_utils.TFMultipleChoiceLoss
< source >
( )
Loss function suitable for multiple choice tasks.
class transformers.modeling_tf_utils.TFQuestionAnsweringLoss
< source >
( )
Loss function suitable for question answering.
class transformers.modeling_tf_utils.TFSequenceClassificationLoss
< source >
( )
Loss function suitable for sequence classification.
class transformers.modeling_tf_utils.TFTokenClassificationLoss
< source >
( )
Loss function suitable for token classification.
Any label of -100 will be ignored (along with the corresponding logits) in the loss computation.
TensorFlow Helper Functions
transformers.modeling_tf_utils.get_initializer
< source >
( initializer_range: float = 0.02 ) → tf.keras.initializers.TruncatedNormal
Parameters
initializer_range (float, defaults to 0.02) — Standard deviation of the initializer range.
Returns
tf.keras.initializers.TruncatedNormal
The truncated normal initializer.
Creates a tf.keras.initializers.TruncatedNormal with the given range.
transformers.modeling_tf_utils.keras_serializable
< source >
( )
Parameters
cls (a tf.keras.layers.Layer subclass) — Typically a TF.MainLayer class in this project; in general it must accept a config argument to its initializer.
Decorate a Keras Layer class to support Keras serialization.
This is done by:
Adding a transformers_config dict to the Keras config dictionary in get_config (called by Keras at serialization time).
Wrapping __init__ to accept that transformers_config dict (passed by Keras at deserialization time) and convert it to a config object for the actual layer initializer.
Registering the class as a custom object in Keras (if the Tensorflow version supports this), so that it does not need to be supplied in custom_objects in the call to tf.keras.models.load_model.
transformers.shape_list
< source >
( tensor: typing.Union[tensorflow.python.framework.ops.Tensor, numpy.ndarray] ) → List[int]
Parameters
tensor (tf.Tensor or np.ndarray) — The tensor we want the shape of.
The shape of the tensor as a list.
Deal with dynamic shape in tensorflow cleanly. |
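A minimal sketch (the result mixes Python integers for static dimensions with tensors for dynamic ones; here every dimension is static, so plain ints are returned):
import tensorflow as tf
from transformers import shape_list

x = tf.zeros((2, 5, 16))
print(shape_list(x))  # [2, 5, 16]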
https://huggingface.co/docs/transformers/main_classes/feature_extractor | Feature Extractor
A feature extractor is in charge of preparing input features for audio or vision models. This includes feature extraction from sequences, e.g., pre-processing audio files to Log-Mel Spectrogram features, feature extraction from images, e.g., cropping image files, but also padding, normalization, and conversion to NumPy, PyTorch, and TensorFlow tensors.
FeatureExtractionMixin
This is a feature extraction mixin used to provide saving/loading functionality for sequential and image feature extractors.
( pretrained_model_name_or_path: typing.Union[str, os.PathLike] cache_dir: typing.Union[str, os.PathLike, NoneType] = None force_download: bool = False local_files_only: bool = False token: typing.Union[bool, str, NoneType] = None revision: str = 'main' **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — This can be either:
a string, the model id of a pretrained feature_extractor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
a path to a directory containing a feature extractor file saved using the save_pretrained() method, e.g., ./my_model_directory/.
a path or url to a saved feature extractor JSON file, e.g., ./my_model_directory/preprocessor_config.json.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model feature extractor should be cached if the standard cache should not be used.
force_download (bool, optional, defaults to False) — Whether or not to force (re-)downloading the feature extractor files and override the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete an incompletely received file. Attempts to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
Instantiate a type of FeatureExtractionMixin from a feature extractor, e.g. a derived class of SequenceFeatureExtractor.
Examples:
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(
"facebook/wav2vec2-base-960h"
)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(
"./test/saved_model/"
)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("./test/saved_model/preprocessor_config.json")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(
"facebook/wav2vec2-base-960h", return_attention_mask=False, foo=False
)
assert feature_extractor.return_attention_mask is False
feature_extractor, unused_kwargs = Wav2Vec2FeatureExtractor.from_pretrained(
"facebook/wav2vec2-base-960h", return_attention_mask=False, foo=False, return_unused_kwargs=True
)
assert feature_extractor.return_attention_mask is False
assert unused_kwargs == {"foo": False}
save_pretrained
( save_directory: typing.Union[str, os.PathLike] push_to_hub: bool = False **kwargs )
Parameters
save_directory (str or os.PathLike) — Directory where the feature extractor JSON file will be saved (will be created if it does not exist).
push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).
kwargs (Dict[str, Any], optional) — Additional key word arguments passed along to the push_to_hub() method.
Save a feature_extractor object to the directory save_directory, so that it can be re-loaded using the from_pretrained() class method.
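For example (a minimal sketch; the local directory name is arbitrary):
from transformers import Wav2Vec2FeatureExtractor

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
feature_extractor.save_pretrained("./my_feature_extractor")  # writes preprocessor_config.json
reloaded = Wav2Vec2FeatureExtractor.from_pretrained("./my_feature_extractor")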
SequenceFeatureExtractor
( feature_size: int sampling_rate: int padding_value: float **kwargs )
Parameters
feature_size (int) — The feature dimension of the extracted features.
sampling_rate (int) — The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
padding_value (float) — The value that is used to fill the padding values / vectors.
This is a general feature extraction class for speech recognition.
pad
( processed_features: typing.Union[transformers.feature_extraction_utils.BatchFeature, typing.List[transformers.feature_extraction_utils.BatchFeature], typing.Dict[str, transformers.feature_extraction_utils.BatchFeature], typing.Dict[str, typing.List[transformers.feature_extraction_utils.BatchFeature]], typing.List[typing.Dict[str, transformers.feature_extraction_utils.BatchFeature]]] padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = True max_length: typing.Optional[int] = None truncation: bool = False pad_to_multiple_of: typing.Optional[int] = None return_attention_mask: typing.Optional[bool] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None )
Parameters
processed_features (BatchFeature, list of BatchFeature, Dict[str, List[float]], Dict[str, List[List[float]]] or List[Dict[str, List[float]]]) — Processed inputs. Can represent one input (BatchFeature or Dict[str, List[float]]) or a batch of input values / vectors (list of BatchFeature, Dict[str, List[List[float]]] or List[Dict[str, List[float]]]), so you can use this method during preprocessing as well as in a PyTorch Dataloader collate function.
Instead of List[float] you can have tensors (numpy arrays, PyTorch tensors or TensorFlow tensors), see the note above for the return type.
padding (bool, str or PaddingStrategy, optional, defaults to True) — Select a strategy to pad the returned sequences (according to the model’s padding side and padding index) among:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
max_length (int, optional) — Maximum length of the returned list and optionally padding length (see above).
truncation (bool) — Activates truncation to cut input sequences longer than max_length to max_length.
pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
return_attention_mask (bool, optional) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific feature_extractor’s default.
What are attention masks?
return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
Pad input values / input vectors or a batch of input values / input vectors up to predefined length or to the max sequence length in the batch.
The padding side (left/right) and padding values are defined at the feature extractor level (with self.padding_side and self.padding_value).
If the processed_features passed are dictionary of numpy arrays, PyTorch tensors or TensorFlow tensors, the result will use the same type unless you provide a different tensor type with return_tensors. In the case of PyTorch tensors, you will lose the specific device of your tensors however.
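For example (a minimal sketch with placeholder waveforms; the checkpoint is the same one used in the examples above):
from transformers import Wav2Vec2FeatureExtractor

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
features = feature_extractor([[0.1] * 1000, [0.2] * 1600], sampling_rate=16000)  # two waveforms of different lengths
padded = feature_extractor.pad(features, padding=True, return_tensors="pt")
print(padded["input_values"].shape)  # torch.Size([2, 1600])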
BatchFeature
class transformers.BatchFeature
< source >
( data: typing.Union[typing.Dict[str, typing.Any], NoneType] = None tensor_type: typing.Union[NoneType, str, transformers.utils.generic.TensorType] = None )
Parameters
data (dict) — Dictionary of lists/arrays/tensors returned by the call/pad methods (‘input_values’, ‘attention_mask’, etc.).
tensor_type (Union[None, str, TensorType], optional) — You can give a tensor_type here to convert the lists of integers into PyTorch/TensorFlow/NumPy tensors at initialization.
Holds the output of the pad() and feature extractor specific __call__ methods.
This class is derived from a python dictionary and can be used as a dictionary.
convert_to_tensors
< source >
( tensor_type: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None )
Parameters
tensor_type (str or TensorType, optional) — The type of tensors to use. If str, should be one of the values of the enum TensorType. If None, no modification is done.
Convert the inner content to tensors.
to
< source >
( *args **kwargs ) → BatchFeature
Parameters
args (Tuple) — Will be passed to the to(...) function of the tensors.
kwargs (Dict, optional) — Will be passed to the to(...) function of the tensors.
The same instance after modification.
Send all values to device by calling v.to(*args, **kwargs) (PyTorch only). This should support casting in different dtypes and sending the BatchFeature to a different device.
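For example (a minimal sketch; assumes PyTorch and a CUDA device are available):
import torch
from transformers import BatchFeature

features = BatchFeature({"input_values": torch.zeros(2, 1600)})
features = features.to("cuda")  # every tensor value is moved to the GPU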
ImageFeatureExtractionMixin
Mixin that contains utilities for preparing image features.
center_crop
( image size ) → new_image
Parameters
image (PIL.Image.Image or np.ndarray or torch.Tensor of shape (n_channels, height, width) or (height, width, n_channels)) — The image to resize.
size (int or Tuple[int, int]) — The size to which to crop the image.
A center cropped PIL.Image.Image or np.ndarray or torch.Tensor of shape: (n_channels, height, width).
Crops image to the given size using a center crop. Note that if the image is too small to be cropped to the given size, it will be padded (so the returned result has the requested size).
convert_rgb
( image )
Parameters
image (PIL.Image.Image) — The image to convert.
Converts PIL.Image.Image to RGB format.
expand_dims
< source >
( image )
Parameters
image (PIL.Image.Image or np.ndarray or torch.Tensor) — The image to expand.
Expands 2-dimensional image to 3 dimensions.
flip_channel_order
( image )
Parameters
image (PIL.Image.Image or np.ndarray or torch.Tensor) — The image whose color channels to flip. If np.ndarray or torch.Tensor, the channel dimension should be first.
Flips the channel order of image from RGB to BGR, or vice versa. Note that this will trigger a conversion of image to a NumPy array if it’s a PIL Image.
normalize
( image mean std rescale = False )
Parameters
image (PIL.Image.Image or np.ndarray or torch.Tensor) — The image to normalize.
mean (List[float] or np.ndarray or torch.Tensor) — The mean (per channel) to use for normalization.
std (List[float] or np.ndarray or torch.Tensor) — The standard deviation (per channel) to use for normalization.
rescale (bool, optional, defaults to False) — Whether or not to rescale the image to be between 0 and 1. If a PIL image is provided, scaling will happen automatically.
Normalizes image with mean and std. Note that this will trigger a conversion of image to a NumPy array if it’s a PIL Image.
rescale
( image: ndarray scale: typing.Union[float, int] )
Rescale a numpy image by scale amount.
resize
( image size resample = None default_to_square = True max_size = None ) → image
Parameters
image (PIL.Image.Image or np.ndarray or torch.Tensor) — The image to resize.
size (int or Tuple[int, int]) — The size to use for resizing the image. If size is a sequence like (h, w), output size will be matched to this.
If size is an int and default_to_square is True, then image will be resized to (size, size). If size is an int and default_to_square is False, then the smaller edge of the image will be matched to this number, i.e., if height > width, then image will be rescaled to (size * height / width, size).
resample (int, optional, defaults to PILImageResampling.BILINEAR) — The filter to use for resampling.
default_to_square (bool, optional, defaults to True) — How to convert size when it is a single int. If set to True, the size will be converted to a square (size,size). If set to False, will replicate torchvision.transforms.Resize with support for resizing only the smallest edge and providing an optional max_size.
max_size (int, optional, defaults to None) — The maximum allowed for the longer edge of the resized image: if the longer edge of the image is greater than max_size after being resized according to size, then the image is resized again so that the longer edge is equal to max_size. As a result, size might be overruled, i.e., the smaller edge may be shorter than size. Only used if default_to_square is False.
A resized PIL.Image.Image.
Resizes image. Enforces conversion of input to PIL.Image.
rotate
( image angle resample = None expand = 0 center = None translate = None fillcolor = None ) → image
Parameters
image (PIL.Image.Image or np.ndarray or torch.Tensor) — The image to rotate. If np.ndarray or torch.Tensor, will be converted to PIL.Image.Image before rotating.
A rotated PIL.Image.Image.
Returns a rotated copy of image. This method returns a copy of image, rotated the given number of degrees counter clockwise around its centre.
to_numpy_array
( image rescale = None channel_first = True )
Parameters
image (PIL.Image.Image or np.ndarray or torch.Tensor) — The image to convert to a NumPy array.
rescale (bool, optional) — Whether or not to apply the scaling factor (to make pixel values floats between 0. and 1.). Will default to True if the image is a PIL Image or an array/tensor of integers, False otherwise.
channel_first (bool, optional, defaults to True) — Whether or not to permute the dimensions of the image to put the channel dimension first.
Converts image to a numpy array. Optionally rescales it and puts the channel dimension as the first dimension.
to_pil_image
( image rescale = None )
Parameters
image (PIL.Image.Image or numpy.ndarray or torch.Tensor) — The image to convert to the PIL Image format.
rescale (bool, optional) — Whether or not to apply the scaling factor (to make pixel values integers between 0 and 255). Will default to True if the image type is a floating type, False otherwise.
Converts image to a PIL Image. Optionally rescales it and puts the channel dimension back as the last axis if needed.
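A minimal end-to-end sketch of these utilities (the sizes and the mean/std values are placeholders):
from PIL import Image
from transformers.image_utils import ImageFeatureExtractionMixin

mixin = ImageFeatureExtractionMixin()
image = Image.new("RGB", (640, 480))
image = mixin.resize(image, size=256)       # default_to_square=True, so the result is 256x256
image = mixin.center_crop(image, size=224)  # 224x224 center crop
array = mixin.to_numpy_array(image)         # channel-first float array rescaled to [0, 1]
array = mixin.normalize(array, mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])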
https://huggingface.co/docs/transformers/main_classes/image_processor | Image Processor
An image processor is in charge of preparing input features for vision models and post-processing their outputs. This includes transformations such as resizing, normalization, and conversion to PyTorch, TensorFlow, Flax and NumPy tensors. It may also include model-specific post-processing such as converting logits to segmentation masks.
ImageProcessingMixin
class transformers.ImageProcessingMixin
< source >
( **kwargs )
This is an image processor mixin used to provide saving/loading functionality for image processors.
from_pretrained
< source >
( pretrained_model_name_or_path: typing.Union[str, os.PathLike] cache_dir: typing.Union[str, os.PathLike, NoneType] = None force_download: bool = False local_files_only: bool = False token: typing.Union[bool, str, NoneType] = None revision: str = 'main' **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — This can be either:
a string, the model id of a pretrained image_processor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
a path to a directory containing an image processor file saved using the save_pretrained() method, e.g., ./my_model_directory/.
a path or url to a saved image processor JSON file, e.g., ./my_model_directory/preprocessor_config.json.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model image processor should be cached if the standard cache should not be used.
force_download (bool, optional, defaults to False) — Whether or not to force to (re-)download the image processor files and override the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to resume downloading an incompletely received file, if one exists, rather than deleting it and downloading from scratch.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
Instantiate a type of ImageProcessingMixin from an image processor.
Examples:
image_processor = CLIPImageProcessor.from_pretrained(
"openai/clip-vit-base-patch32"
)
image_processor = CLIPImageProcessor.from_pretrained(
"./test/saved_model/"
)
image_processor = CLIPImageProcessor.from_pretrained("./test/saved_model/preprocessor_config.json")
image_processor = CLIPImageProcessor.from_pretrained(
"openai/clip-vit-base-patch32", do_normalize=False, foo=False
)
assert image_processor.do_normalize is False
image_processor, unused_kwargs = CLIPImageProcessor.from_pretrained(
"openai/clip-vit-base-patch32", do_normalize=False, foo=False, return_unused_kwargs=True
)
assert image_processor.do_normalize is False
assert unused_kwargs == {"foo": False}
save_pretrained
< source >
( save_directory: typing.Union[str, os.PathLike] push_to_hub: bool = False **kwargs )
Parameters
save_directory (str or os.PathLike) — Directory where the image processor JSON file will be saved (will be created if it does not exist).
push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).
kwargs (Dict[str, Any], optional) — Additional key word arguments passed along to the push_to_hub() method.
Save an image processor object to the directory save_directory, so that it can be re-loaded using the from_pretrained() class method.
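For example (a minimal sketch; the local directory name is arbitrary):
from transformers import CLIPImageProcessor

image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
image_processor.save_pretrained("./my_image_processor")  # writes preprocessor_config.json
reloaded = CLIPImageProcessor.from_pretrained("./my_image_processor")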
BatchFeature
class transformers.BatchFeature
< source >
( data: typing.Union[typing.Dict[str, typing.Any], NoneType] = None tensor_type: typing.Union[NoneType, str, transformers.utils.generic.TensorType] = None )
Parameters
data (dict) — Dictionary of lists/arrays/tensors returned by the call/pad methods (‘input_values’, ‘attention_mask’, etc.).
tensor_type (Union[None, str, TensorType], optional) — You can give a tensor_type here to convert the lists of integers into PyTorch/TensorFlow/NumPy tensors at initialization.
Holds the output of the pad() and feature extractor specific __call__ methods.
This class is derived from a python dictionary and can be used as a dictionary.
convert_to_tensors
< source >
( tensor_type: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None )
Parameters
tensor_type (str or TensorType, optional) — The type of tensors to use. If str, should be one of the values of the enum TensorType. If None, no modification is done.
Convert the inner content to tensors.
to
< source >
( *args **kwargs ) → BatchFeature
Parameters
args (Tuple) — Will be passed to the to(...) function of the tensors.
kwargs (Dict, optional) — Will be passed to the to(...) function of the tensors.
The same instance after modification.
Send all values to device by calling v.to(*args, **kwargs) (PyTorch only). This should support casting in different dtypes and sending the BatchFeature to a different device.
BaseImageProcessor
class transformers.image_processing_utils.BaseImageProcessor
< source >
( **kwargs )
center_crop
< source >
( image: ndarray size: typing.Dict[str, int] data_format: typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None input_data_format: typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None **kwargs )
Parameters
image (np.ndarray) — Image to center crop.
size (Dict[str, int]) — Size of the output image.
data_format (str or ChannelDimension, optional) — The channel dimension format for the output image. If unset, the channel dimension format of the input image is used. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
Center crop an image to (size["height"], size["width"]). If the input size is smaller than crop_size along any edge, the image is padded with 0’s and then center cropped.
normalize
< source >
( image: ndarray mean: typing.Union[float, typing.Iterable[float]] std: typing.Union[float, typing.Iterable[float]] data_format: typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None input_data_format: typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None **kwargs ) → np.ndarray
Parameters
image (np.ndarray) — Image to normalize.
mean (float or Iterable[float]) — Image mean to use for normalization.
std (float or Iterable[float]) — Image standard deviation to use for normalization.
data_format (str or ChannelDimension, optional) — The channel dimension format for the output image. If unset, the channel dimension format of the input image is used. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
The normalized image.
Normalize an image. image = (image - image_mean) / image_std.
rescale
< source >
( image: ndarray scale: float data_format: typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None input_data_format: typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None **kwargs ) → np.ndarray
Parameters
image (np.ndarray) — Image to rescale.
scale (float) — The scaling factor to rescale pixel values by.
data_format (str or ChannelDimension, optional) — The channel dimension format for the output image. If unset, the channel dimension format of the input image is used. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
The rescaled image.
Rescale an image by a scale factor. image = image * scale.
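A minimal sketch chaining these operations on a NumPy image (the crop size and mean/std values are placeholders):
import numpy as np
from transformers.image_processing_utils import BaseImageProcessor

processor = BaseImageProcessor()
image = np.random.randint(0, 256, size=(3, 480, 640), dtype=np.uint8)  # channels-first image
image = processor.rescale(image, scale=1 / 255)                        # float pixel values in [0, 1]
image = processor.center_crop(image, size={"height": 224, "width": 224})
image = processor.normalize(image, mean=0.5, std=0.5)
print(image.shape)  # (3, 224, 224)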
https://huggingface.co/docs/transformers/main_classes/tokenizer | Tokenizer
A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most of the tokenizers are available in two flavors: a full python implementation and a “Fast” implementation based on the Rust library 🤗 Tokenizers. The “Fast” implementations allow:
a significant speed-up in particular when doing batched tokenization and
additional methods to map between the original string (character and words) and the token space (e.g. getting the index of the token comprising a given character or the span of characters corresponding to a given token).
The base classes PreTrainedTokenizer and PreTrainedTokenizerFast implement the common methods for encoding string inputs in model inputs (see below) and instantiating/saving python and “Fast” tokenizers either from a local file or directory or from a pretrained tokenizer provided by the library (downloaded from HuggingFace’s AWS S3 repository). They both rely on PreTrainedTokenizerBase that contains the common methods, and SpecialTokensMixin.
PreTrainedTokenizer and PreTrainedTokenizerFast thus implement the main methods for using all the tokenizers:
Tokenizing (splitting strings into sub-word token strings), converting token strings to ids and back, and encoding/decoding (i.e., tokenizing and converting to integers).
Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece…).
Managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes in the tokenizer for easy access and making sure they are not split during tokenization.
BatchEncoding holds the output of the PreTrainedTokenizerBase’s encoding methods (__call__, encode_plus and batch_encode_plus) and is derived from a Python dictionary. When the tokenizer is a pure python tokenizer, this class behaves just like a standard python dictionary and holds the various model inputs computed by these methods (input_ids, attention_mask…). When the tokenizer is a “Fast” tokenizer (i.e., backed by HuggingFace tokenizers library), this class provides in addition several advanced alignment methods which can be used to map between the original string (character and words) and the token space (e.g., getting the index of the token comprising a given character or the span of characters corresponding to a given token).
PreTrainedTokenizer
class transformers.PreTrainedTokenizer
< source >
( **kwargs )
Parameters
model_max_length (int, optional) — The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with from_pretrained(), this will be set to the value stored for the associated model in max_model_input_sizes (see above). If no value is provided, will default to VERY_LARGE_INTEGER (int(1e30)).
padding_side (str, optional) — The side on which the model should have padding applied. Should be selected between [‘right’, ‘left’]. Default value is picked from the class attribute of the same name.
truncation_side (str, optional) — The side on which the model should have truncation applied. Should be selected between [‘right’, ‘left’]. Default value is picked from the class attribute of the same name.
chat_template (str, optional) — A Jinja template string that will be used to format lists of chat messages. See https://huggingface.co/docs/transformers/chat_templating for a full description.
model_input_names (List[string], optional) — The list of inputs accepted by the forward pass of the model (like "token_type_ids" or "attention_mask"). Default value is picked from the class attribute of the same name.
bos_token (str or tokenizers.AddedToken, optional) — A special token representing the beginning of a sentence. Will be associated to self.bos_token and self.bos_token_id.
eos_token (str or tokenizers.AddedToken, optional) — A special token representing the end of a sentence. Will be associated to self.eos_token and self.eos_token_id.
unk_token (str or tokenizers.AddedToken, optional) — A special token representing an out-of-vocabulary token. Will be associated to self.unk_token and self.unk_token_id.
sep_token (str or tokenizers.AddedToken, optional) — A special token separating two different sentences in the same input (used by BERT for instance). Will be associated to self.sep_token and self.sep_token_id.
pad_token (str or tokenizers.AddedToken, optional) — A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by attention mechanisms or loss computation. Will be associated to self.pad_token and self.pad_token_id.
cls_token (str or tokenizers.AddedToken, optional) — A special token representing the class of the input (used by BERT for instance). Will be associated to self.cls_token and self.cls_token_id.
mask_token (str or tokenizers.AddedToken, optional) — A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). Will be associated to self.mask_token and self.mask_token_id.
additional_special_tokens (tuple or list of str or tokenizers.AddedToken, optional) — A tuple or a list of additional special tokens. Add them here to ensure they are skipped when decoding with skip_special_tokens set to True. If they are not part of the vocabulary, they will be added at the end of the vocabulary.
clean_up_tokenization_spaces (bool, optional, defaults to True) — Whether or not the model should cleanup the spaces that were added when splitting the input text during the tokenization process.
split_special_tokens (bool, optional, defaults to False) — Whether or not the special tokens should be split during the tokenization process. The default behavior is to not split special tokens. This means that if <s> is the bos_token, then tokenizer.tokenize("<s>") = ['<s>']. Otherwise, if split_special_tokens=True, then tokenizer.tokenize("<s>") will give ['<', 's', '>']. This argument is only supported for slow tokenizers for the moment.
Base class for all slow tokenizers.
Inherits from PreTrainedTokenizerBase.
Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading pretrained tokenizers and adding tokens to the vocabulary.
This class also contains the added tokens in a unified way on top of all tokenizers so we don’t have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece…).
Class attributes (overridden by derived classes)
vocab_files_names (Dict[str, str]) — A dictionary with, as keys, the __init__ keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string).
pretrained_vocab_files_map (Dict[str, Dict[str, str]]) — A dictionary of dictionaries, with the high-level keys being the __init__ keyword name of each vocabulary file required by the model, the low-level keys being the short-cut-names of the pretrained models and, as associated values, the url to the associated pretrained vocabulary file.
max_model_input_sizes (Dict[str, Optional[int]]) — A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, the maximum length of the sequence inputs of this model, or None if the model has no maximum input size.
pretrained_init_configuration (Dict[str, Dict[str, Any]]) — A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, a dictionary of specific arguments to pass to the __init__ method of the tokenizer class for this pretrained model when loading the tokenizer with the from_pretrained() method.
model_input_names (List[str]) — A list of inputs expected in the forward pass of the model.
padding_side (str) — The default value for the side on which the model should have padding applied. Should be 'right' or 'left'.
truncation_side (str) — The default value for the side on which the model should have truncation applied. Should be 'right' or 'left'.
__call__
< source >
( text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None text_pair: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None text_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None text_pair_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None max_length: typing.Optional[int] = None stride: int = 0 is_split_into_words: bool = False pad_to_multiple_of: typing.Optional[int] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None return_token_type_ids: typing.Optional[bool] = None return_attention_mask: typing.Optional[bool] = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True **kwargs ) → BatchEncoding
Parameters
text (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_pair (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_target (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_pair_target (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
add_special_tokens (bool, optional, defaults to True) — Whether or not to add special tokens when encoding the sequences. This will use the underlying PretrainedTokenizerBase.build_inputs_with_special_tokens function, which defines which tokens are automatically added to the input ids. This is useful if you want to add bos or eos tokens automatically.
padding (bool, str or PaddingStrategy, optional, defaults to False) — Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) — Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) — Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) — If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) — Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.
What are token type IDs?
return_attention_mask (bool, optional) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
return_overflowing_tokens (bool, optional, defaults to False) — Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) — Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) — Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast; if using Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) — Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) — Whether or not to print more information and warnings.
**kwargs — Passed to the self.tokenize() method.
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model.
What are input IDs?
token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True or if “token_type_ids” is in self.model_input_names).
What are token type IDs?
attention_mask — List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names).
What are attention masks?
overflowing_tokens — List of overflowing tokens sequences (when a max_length is specified and return_overflowing_tokens=True).
num_truncated_tokens — Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).
special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length — The length of the inputs (when return_length=True)
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.
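For example (a minimal sketch using a BERT checkpoint):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoded = tokenizer("Hello world!", padding="max_length", max_length=16, return_tensors="pt")
print(encoded["input_ids"].shape)       # torch.Size([1, 16])
print(encoded["attention_mask"].shape)  # torch.Size([1, 16])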
apply_chat_template
< source >
( conversation: typing.Union[typing.List[typing.Dict[str, str]], ForwardRef('Conversation')] chat_template: typing.Optional[str] = None tokenize: bool = True padding: bool = False truncation: bool = False max_length: typing.Optional[int] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None **tokenizer_kwargs ) → List[int]
Parameters
conversation (Union[List[Dict[str, str]], “Conversation”]) — A Conversation object or list of dicts with “role” and “content” keys, representing the chat history so far.
chat_template (str, optional) — A Jinja template to use for this conversion. If this is not passed, the model’s default chat template will be used instead.
tokenize (bool, defaults to True) — Whether to tokenize the output. If False, the output will be a string.
padding (bool, defaults to False) — Whether to pad sequences to the maximum length. Has no effect if tokenize is False.
truncation (bool, defaults to False) — Whether to truncate sequences at the maximum length. Has no effect if tokenize is False.
max_length (int, optional) — Maximum length (in tokens) to use for padding or truncation. Has no effect if tokenize is False. If not specified, the tokenizer’s max_length attribute will be used as a default.
return_tensors (str or TensorType, optional) — If set, will return tensors of a particular framework. Has no effect if tokenize is False. Acceptable values are:
'tf': Return TensorFlow tf.Tensor objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return NumPy np.ndarray objects.
'jax': Return JAX jnp.ndarray objects.
**tokenizer_kwargs — Additional kwargs to pass to the tokenizer.
A list of token ids representing the tokenized chat so far, including control tokens. This output is ready to pass to the model, either directly or via methods like generate().
Converts a Conversation object or a list of dictionaries with "role" and "content" keys to a list of token ids. This method is intended for use with chat models, and will read the tokenizer’s chat_template attribute to determine the format and control tokens to use when converting. When chat_template is None, it will fall back to the default_chat_template specified at the class level.
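For example (a minimal sketch; the checkpoint is only illustrative and is assumed to ship a chat template):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
chat = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False)    # formatted prompt string
input_ids = tokenizer.apply_chat_template(chat, tokenize=True)  # list of token ids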
batch_decode
< source >
( sequences: typing.Union[typing.List[int], typing.List[typing.List[int]], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')] skip_special_tokens: bool = False clean_up_tokenization_spaces: bool = None **kwargs ) → List[str]
Parameters
sequences (Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]) — List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) — Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional) — Whether or not to clean up the tokenization spaces. If None, will default to self.clean_up_tokenization_spaces.
kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.
The list of decoded sentences.
Convert a list of lists of token ids into a list of strings by calling decode.
decode
< source >
( token_ids: typing.Union[int, typing.List[int], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')] skip_special_tokens: bool = False clean_up_tokenization_spaces: bool = None **kwargs ) → str
Parameters
token_ids (Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]) — List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) — Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional) — Whether or not to clean up the tokenization spaces. If None, will default to self.clean_up_tokenization_spaces.
kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.
The decoded sentence.
Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.
Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).
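For example (a minimal sketch, continuing with a BERT checkpoint):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
ids = tokenizer("Hello world!")["input_ids"]
print(tokenizer.decode(ids))                            # "[CLS] Hello world! [SEP]"
print(tokenizer.decode(ids, skip_special_tokens=True))  # "Hello world!"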
encode
< source >
( text: typing.Union[str, typing.List[str], typing.List[int]] text_pair: typing.Union[str, typing.List[str], typing.List[int], NoneType] = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None max_length: typing.Optional[int] = None stride: int = 0 return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None **kwargs ) → List[int], torch.Tensor, tf.Tensor or np.ndarray
Parameters
text (str, List[str] or List[int]) — The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
text_pair (str, List[str] or List[int], optional) — Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
add_special_tokens (bool, optional, defaults to True) — Whether or not to add special tokens when encoding the sequences. This will use the underlying PretrainedTokenizerBase.build_inputs_with_special_tokens function, which defines which tokens are automatically added to the input ids. This is useful if you want to add bos or eos tokens automatically.
padding (bool, str or PaddingStrategy, optional, defaults to False) — Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) — Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) — Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) — If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
**kwargs — Passed along to the .tokenize() method.
Returns
List[int], torch.Tensor, tf.Tensor or np.ndarray
The tokenized ids of the text.
Converts a string to a sequence of ids (integers), using the tokenizer and vocabulary.
Same as doing self.convert_tokens_to_ids(self.tokenize(text)).
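For example (a minimal sketch; the exact ids depend on the checkpoint's vocabulary):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
ids = tokenizer.encode("Hello world!")
print(ids)                    # e.g. [101, ..., 102], with special tokens added by default
print(tokenizer.decode(ids))  # "[CLS] Hello world! [SEP]"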
push_to_hub
< source >
( repo_id: str use_temp_dir: typing.Optional[bool] = None commit_message: typing.Optional[str] = None private: typing.Optional[bool] = None token: typing.Union[bool, str, NoneType] = None max_shard_size: typing.Union[int, str, NoneType] = '10GB' create_pr: bool = False safe_serialization: bool = False revision: str = None **deprecated_kwargs )
Parameters
repo_id (str) — The name of the repository you want to push your tokenizer to. It should contain your organization name when pushing to a given organization.
use_temp_dir (bool, optional) — Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to True if there is no directory named like repo_id, False otherwise.
commit_message (str, optional) — Message to commit while pushing. Will default to "Upload tokenizer".
private (bool, optional) — Whether or not the repository created should be private.
token (bool or str, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.
max_shard_size (int or str, optional, defaults to "10GB") — Only applicable for models. The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be smaller than this size. If expressed as a string, needs to be digits followed by a unit (like "5MB").
create_pr (bool, optional, defaults to False) — Whether or not to create a PR with the uploaded files or directly commit.
safe_serialization (bool, optional, defaults to False) — Whether or not to convert the model weights in safetensors format for safer serialization.
revision (str, optional) — Branch to push the uploaded files to.
Upload the tokenizer files to the 🤗 Model Hub.
Examples:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenizer.push_to_hub("my-finetuned-bert")
tokenizer.push_to_hub("huggingface/my-finetuned-bert")
convert_ids_to_tokens
< source >
( ids: typing.Union[int, typing.List[int]] skip_special_tokens: bool = False ) → str or List[str]
Parameters
ids (int or List[int]) — The token id (or token ids) to convert to tokens.
skip_special_tokens (bool, optional, defaults to False) — Whether or not to remove special tokens in the decoding.
The decoded token(s).
Converts a single index or a sequence of indices into a token or a sequence of tokens, using the vocabulary and added tokens.
convert_tokens_to_ids
< source >
( tokens: typing.Union[str, typing.List[str]] ) → int or List[int]
Parameters
tokens (str or List[str]) — One or several token(s) to convert to token id(s).
The token id or list of token ids.
Converts a token string (or a sequence of tokens) into a single integer id (or a sequence of ids), using the vocabulary.
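For example (a minimal sketch, again with a BERT checkpoint):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokens = tokenizer.tokenize("Hello world!")
ids = tokenizer.convert_tokens_to_ids(tokens)
assert tokenizer.convert_ids_to_tokens(ids) == tokens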
get_added_vocab
Returns the added tokens in the vocabulary as a dictionary of token to index. Results might be different from the fast call because for now we always add the tokens even if they are already in the vocabulary. This is something we should change.
num_special_tokens_to_add
< source >
( pair: bool = False ) → int
Parameters
pair (bool, optional, defaults to False) — Whether the number of added tokens should be computed in the case of a sequence pair or a single sequence.
Number of special tokens added to sequences.
Returns the number of added tokens when encoding a sequence with special tokens.
This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.
prepare_for_tokenization
< source >
( text: str is_split_into_words: bool = False **kwargs ) → Tuple[str, Dict[str, Any]]
Parameters
text (str) — The text to prepare.
is_split_into_words (bool, optional, defaults to False) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
kwargs (Dict[str, Any], optional) — Keyword arguments to use for the tokenization.
Returns
Tuple[str, Dict[str, Any]]
The prepared text and the unused kwargs.
Performs any necessary transformations before tokenization.
This method should pop the arguments from kwargs and return the remaining kwargs as well. We test the kwargs at the end of the encoding process to be sure all the arguments have been used.
tokenize
< source >
( text: str **kwargs ) → List[str]
Parameters
text (str) — The sequence to be encoded.
**kwargs (additional keyword arguments) — Passed along to the model-specific prepare_for_tokenization preprocessing method.
The list of tokens.
Converts a string in a sequence of tokens, using the tokenizer.
Split in words for word-based vocabulary or sub-words for sub-word-based vocabularies (BPE/SentencePieces/WordPieces). Takes care of added tokens.
PreTrainedTokenizerFast
PreTrainedTokenizerFast depends on the tokenizers library. The tokenizers obtained from the 🤗 tokenizers library can be loaded very simply into 🤗 transformers. Take a look at the Using tokenizers from 🤗 tokenizers page to understand how this is done.
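For example (a minimal sketch; Tokenizer.from_pretrained simply fetches an existing tokenizer.json here, but any tokenizers.Tokenizer object, e.g. one you trained yourself, can be wrapped the same way):
from tokenizers import Tokenizer
from transformers import PreTrainedTokenizerFast

tokenizer_object = Tokenizer.from_pretrained("bert-base-cased")
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer_object)
print(fast_tokenizer("Hello world!")["input_ids"])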
class transformers.PreTrainedTokenizerFast
< source >
( *args **kwargs )
Parameters
model_max_length (int, optional) — The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with from_pretrained(), this will be set to the value stored for the associated model in max_model_input_sizes (see above). If no value is provided, will default to VERY_LARGE_INTEGER (int(1e30)).
padding_side (str, optional) — The side on which the model should have padding applied. Should be selected between [‘right’, ‘left’]. Default value is picked from the class attribute of the same name.
truncation_side (str, optional) — The side on which the model should have truncation applied. Should be selected between [‘right’, ‘left’]. Default value is picked from the class attribute of the same name.
chat_template (str, optional) — A Jinja template string that will be used to format lists of chat messages. See https://huggingface.co/docs/transformers/chat_templating for a full description.
model_input_names (List[string], optional) — The list of inputs accepted by the forward pass of the model (like "token_type_ids" or "attention_mask"). Default value is picked from the class attribute of the same name.
bos_token (str or tokenizers.AddedToken, optional) — A special token representing the beginning of a sentence. Will be associated to self.bos_token and self.bos_token_id.
eos_token (str or tokenizers.AddedToken, optional) — A special token representing the end of a sentence. Will be associated to self.eos_token and self.eos_token_id.
unk_token (str or tokenizers.AddedToken, optional) — A special token representing an out-of-vocabulary token. Will be associated to self.unk_token and self.unk_token_id.
sep_token (str or tokenizers.AddedToken, optional) — A special token separating two different sentences in the same input (used by BERT for instance). Will be associated to self.sep_token and self.sep_token_id.
pad_token (str or tokenizers.AddedToken, optional) — A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by attention mechanisms or loss computation. Will be associated to self.pad_token and self.pad_token_id.
cls_token (str or tokenizers.AddedToken, optional) — A special token representing the class of the input (used by BERT for instance). Will be associated to self.cls_token and self.cls_token_id.
mask_token (str or tokenizers.AddedToken, optional) — A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). Will be associated to self.mask_token and self.mask_token_id.
additional_special_tokens (tuple or list of str or tokenizers.AddedToken, optional) — A tuple or a list of additional special tokens. Add them here to ensure they are skipped when decoding with skip_special_tokens set to True. If they are not part of the vocabulary, they will be added at the end of the vocabulary.
clean_up_tokenization_spaces (bool, optional, defaults to True) — Whether or not the model should cleanup the spaces that were added when splitting the input text during the tokenization process.
split_special_tokens (bool, optional, defaults to False) — Whether or not the special tokens should be split during the tokenization process. The default behavior is to not split special tokens. This means that if <s> is the bos_token, then tokenizer.tokenize("<s>") = ['<s>']. Otherwise, if split_special_tokens=True, then tokenizer.tokenize("<s>") will give ['<', 's', '>']. This argument is only supported for slow tokenizers for the moment.
tokenizer_object (tokenizers.Tokenizer) — A tokenizers.Tokenizer object from 🤗 tokenizers to instantiate from. See Using tokenizers from 🤗 tokenizers for more information.
tokenizer_file (str) — A path to a local JSON file representing a previously serialized tokenizers.Tokenizer object from 🤗 tokenizers.
Base class for all fast tokenizers (wrapping HuggingFace tokenizers library).
Inherits from PreTrainedTokenizerBase.
Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading pretrained tokenizers and for adding tokens to the vocabulary.
This class also contains the added tokens in a unified way on top of all tokenizers so we don’t have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece…).
Class attributes (overridden by derived classes)
vocab_files_names (Dict[str, str]) — A dictionary with, as keys, the __init__ keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string).
pretrained_vocab_files_map (Dict[str, Dict[str, str]]) — A dictionary of dictionaries, with the high-level keys being the __init__ keyword name of each vocabulary file required by the model, the low-level being the short-cut-names of the pretrained models with, as associated values, the url to the associated pretrained vocabulary file.
max_model_input_sizes (Dict[str, Optional[int]]) — A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, the maximum length of the sequence inputs of this model, or None if the model has no maximum input size.
pretrained_init_configuration (Dict[str, Dict[str, Any]]) — A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, a dictionary of specific arguments to pass to the __init__ method of the tokenizer class for this pretrained model when loading the tokenizer with the from_pretrained() method.
model_input_names (List[str]) — A list of inputs expected in the forward pass of the model.
padding_side (str) — The default value for the side on which the model should have padding applied. Should be 'right' or 'left'.
truncation_side (str) — The default value for the side on which the model should have truncation applied. Should be 'right' or 'left'.
__call__
< source >
( text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None text_pair: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None text_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None text_pair_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None max_length: typing.Optional[int] = None stride: int = 0 is_split_into_words: bool = False pad_to_multiple_of: typing.Optional[int] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None return_token_type_ids: typing.Optional[bool] = None return_attention_mask: typing.Optional[bool] = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True **kwargs ) → BatchEncoding
Parameters
text (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_pair (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_target (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_pair_target (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
add_special_tokens (bool, optional, defaults to True) — Whether or not to add special tokens when encoding the sequences. This will use the underlying PretrainedTokenizerBase.build_inputs_with_special_tokens function, which defines which tokens are automatically added to the input ids. This is useful if you want to add bos or eos tokens automatically.
padding (bool, str or PaddingStrategy, optional, defaults to False) — Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) — Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) — Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) — If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) — Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.
What are token type IDs?
return_attention_mask (bool, optional) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
return_overflowing_tokens (bool, optional, defaults to False) — Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) — Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) — Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast, if using Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) — Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) — Whether or not to print more information and warnings. **kwargs — passed to the self.tokenize() method
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model.
What are input IDs?
token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True or if “token_type_ids” is in self.model_input_names).
What are token type IDs?
attention_mask — List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names).
What are attention masks?
overflowing_tokens — List of overflowing tokens sequences (when a max_length is specified and return_overflowing_tokens=True).
num_truncated_tokens — Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).
special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length — The length of the inputs (when return_length=True)
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.
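For illustration, here is a minimal sketch of this call path; the checkpoint name and input strings are only examples, and PyTorch is assumed for return_tensors="pt".
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
# Single sentence, a sentence pair, and a padded/truncated batch returned as PyTorch tensors.
single = tokenizer("Hello, world!")
pair = tokenizer("What is the capital of France?", "Paris is the capital of France.")
batch = tokenizer(
    ["A short sentence.", "A slightly longer sentence to show padding."],
    padding=True,
    truncation=True,
    max_length=16,
    return_tensors="pt",
)
print(single["input_ids"])            # list of token ids, including special tokens
print(batch["attention_mask"].shape)  # (2, padded_length)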
apply_chat_template
< source >
( conversation: typing.Union[typing.List[typing.Dict[str, str]], ForwardRef('Conversation')] chat_template: typing.Optional[str] = None tokenize: bool = True padding: bool = False truncation: bool = False max_length: typing.Optional[int] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None **tokenizer_kwargs ) → List[int]
Parameters
conversation (Union[List[Dict[str, str]], “Conversation”]) — A Conversation object or list of dicts with “role” and “content” keys, representing the chat history so far.
chat_template (str, optional) — A Jinja template to use for this conversion. If this is not passed, the model’s default chat template will be used instead.
tokenize (bool, defaults to True) — Whether to tokenize the output. If False, the output will be a string.
padding (bool, defaults to False) — Whether to pad sequences to the maximum length. Has no effect if tokenize is False.
truncation (bool, defaults to False) — Whether to truncate sequences at the maximum length. Has no effect if tokenize is False.
max_length (int, optional) — Maximum length (in tokens) to use for padding or truncation. Has no effect if tokenize is False. If not specified, the tokenizer’s max_length attribute will be used as a default.
return_tensors (str or TensorType, optional) — If set, will return tensors of a particular framework. Has no effect if tokenize is False. Acceptable values are:
'tf': Return TensorFlow tf.Tensor objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return NumPy np.ndarray objects.
'jax': Return JAX jnp.ndarray objects. **tokenizer_kwargs — Additional kwargs to pass to the tokenizer.
A list of token ids representing the tokenized chat so far, including control tokens. This output is ready to pass to the model, either directly or via methods like generate().
Converts a Conversation object or a list of dictionaries with "role" and "content" keys to a list of token ids. This method is intended for use with chat models, and will read the tokenizer’s chat_template attribute to determine the format and control tokens to use when converting. When chat_template is None, it will fall back to the default_chat_template specified at the class level.
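As a hedged sketch, assuming a chat-tuned checkpoint that ships a chat_template (the model name below is only an example and requires downloading that tokenizer):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
chat = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is tokenization?"},
]
# tokenize=False returns the formatted prompt string; the default tokenize=True returns token ids.
prompt = tokenizer.apply_chat_template(chat, tokenize=False)
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt")
# input_ids can then be passed to model.generate() to continue the conversation.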
batch_decode
< source >
( sequences: typing.Union[typing.List[int], typing.List[typing.List[int]], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')] skip_special_tokens: bool = False clean_up_tokenization_spaces: bool = None **kwargs ) → List[str]
Parameters
sequences (Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]) — List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) — Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional) — Whether or not to clean up the tokenization spaces. If None, will default to self.clean_up_tokenization_spaces.
kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.
The list of decoded sentences.
Convert a list of lists of token ids into a list of strings by calling decode.
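A minimal sketch pairing __call__ with batch_decode (the checkpoint name is only an example):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoded = tokenizer(["Hello world", "Goodbye world"], padding=True)
# skip_special_tokens=True drops [CLS]/[SEP]/[PAD]-style tokens from the decoded strings.
decoded = tokenizer.batch_decode(encoded["input_ids"], skip_special_tokens=True)
print(decoded)  # e.g. ['Hello world', 'Goodbye world']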
decode
< source >
( token_ids: typing.Union[int, typing.List[int], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')] skip_special_tokens: bool = False clean_up_tokenization_spaces: bool = None **kwargs ) → str
Parameters
token_ids (Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]) — List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) — Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional) — Whether or not to clean up the tokenization spaces. If None, will default to self.clean_up_tokenization_spaces.
kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.
The decoded sentence.
Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.
Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).
encode
< source >
( text: typing.Union[str, typing.List[str], typing.List[int]] text_pair: typing.Union[str, typing.List[str], typing.List[int], NoneType] = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None max_length: typing.Optional[int] = None stride: int = 0 return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None **kwargs ) → List[int], torch.Tensor, tf.Tensor or np.ndarray
Parameters
text (str, List[str] or List[int]) — The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
text_pair (str, List[str] or List[int], optional) — Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
add_special_tokens (bool, optional, defaults to True) — Whether or not to add special tokens when encoding the sequences. This will use the underlying PretrainedTokenizerBase.build_inputs_with_special_tokens function, which defines which tokens are automatically added to the input ids. This is useful if you want to add bos or eos tokens automatically.
padding (bool, str or PaddingStrategy, optional, defaults to False) — Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) — Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) — Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) — If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
**kwargs — Passed along to the .tokenize() method.
Returns
List[int], torch.Tensor, tf.Tensor or np.ndarray
The tokenized ids of the text.
Converts a string to a sequence of ids (integer), using the tokenizer and vocabulary.
Same as doing self.convert_tokens_to_ids(self.tokenize(text)).
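As a quick sketch of the encode/decode round trip (the checkpoint name and input string are only examples):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
ids = tokenizer.encode("Transformers are neat.")        # list of ints, with special tokens added
text = tokenizer.decode(ids, skip_special_tokens=True)  # back to a plain string
print(ids)
print(text)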
push_to_hub
< source >
( repo_id: str use_temp_dir: typing.Optional[bool] = None commit_message: typing.Optional[str] = None private: typing.Optional[bool] = None token: typing.Union[bool, str, NoneType] = None max_shard_size: typing.Union[int, str, NoneType] = '10GB' create_pr: bool = False safe_serialization: bool = False revision: str = None **deprecated_kwargs )
Parameters
repo_id (str) — The name of the repository you want to push your tokenizer to. It should contain your organization name when pushing to a given organization.
use_temp_dir (bool, optional) — Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to True if there is no directory named like repo_id, False otherwise.
commit_message (str, optional) — Message to commit while pushing. Will default to "Upload tokenizer".
private (bool, optional) — Whether or not the repository created should be private.
token (bool or str, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.
max_shard_size (int or str, optional, defaults to "10GB") — Only applicable for models. The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be of a size lower than this limit. If expressed as a string, needs to be digits followed by a unit (like "5MB").
create_pr (bool, optional, defaults to False) — Whether or not to create a PR with the uploaded files or directly commit.
safe_serialization (bool, optional, defaults to False) — Whether or not to convert the model weights in safetensors format for safer serialization.
revision (str, optional) — Branch to push the uploaded files to.
Upload the tokenizer files to the 🤗 Model Hub.
Examples:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
# Push the tokenizer to your namespace with the repo name "my-finetuned-bert".
tokenizer.push_to_hub("my-finetuned-bert")
# Push the tokenizer to the "huggingface" organization with the repo name "my-finetuned-bert".
tokenizer.push_to_hub("huggingface/my-finetuned-bert")
convert_ids_to_tokens
< source >
( ids: typing.Union[int, typing.List[int]] skip_special_tokens: bool = False ) → str or List[str]
Parameters
ids (int or List[int]) — The token id (or token ids) to convert to tokens.
skip_special_tokens (bool, optional, defaults to False) — Whether or not to remove special tokens in the decoding.
The decoded token(s).
Converts a single index or a sequence of indices in a token or a sequence of tokens, using the vocabulary and added tokens.
convert_tokens_to_ids
< source >
( tokens: typing.Union[str, typing.List[str]] ) → int or List[int]
Parameters
tokens (str or List[str]) — One or several token(s) to convert to token id(s).
The token id or list of token ids.
Converts a token string (or a sequence of tokens) into a single integer id (or a sequence of ids), using the vocabulary.
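A short sketch of the token-to-id conversion helpers (the checkpoint name is only an example):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokens = tokenizer.tokenize("Hello world")     # subword token strings
ids = tokenizer.convert_tokens_to_ids(tokens)  # vocabulary indices
back = tokenizer.convert_ids_to_tokens(ids)    # token strings again
print(tokens, ids, back)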
get_added_vocab
< source >
( ) → Dict[str, int]
The added tokens.
Returns the added tokens in the vocabulary as a dictionary of token to index.
num_special_tokens_to_add
< source >
( pair: bool = False ) → int
Parameters
pair (bool, optional, defaults to False) — Whether the number of added tokens should be computed in the case of a sequence pair or a single sequence.
Number of special tokens added to sequences.
Returns the number of added tokens when encoding a sequence with special tokens.
This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.
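For example (the counts shown in the comments assume a BERT-style tokenizer and may differ for other models):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
print(tokenizer.num_special_tokens_to_add())           # e.g. 2 for [CLS] ... [SEP]
print(tokenizer.num_special_tokens_to_add(pair=True))  # e.g. 3 for [CLS] ... [SEP] ... [SEP]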
set_truncation_and_padding
< source >
( padding_strategy: PaddingStrategy truncation_strategy: TruncationStrategy max_length: int stride: int pad_to_multiple_of: typing.Optional[int] )
Parameters
padding_strategy (PaddingStrategy) — The kind of padding that will be applied to the input
truncation_strategy (TruncationStrategy) — The kind of truncation that will be applied to the input
max_length (int) — The maximum size of a sequence.
stride (int) — The stride to use when handling overflow.
pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
Define the truncation and the padding strategies for fast tokenizers (provided by HuggingFace tokenizers library) and restore the tokenizer settings afterwards.
The provided tokenizer has no padding / truncation strategy before the managed section. If your tokenizer had a padding / truncation strategy set beforehand, it will be reset to no padding / truncation when exiting the managed section.
train_new_from_iterator
< source >
( text_iterator vocab_size length = None new_special_tokens = None special_tokens_map = None **kwargs ) → PreTrainedTokenizerFast
Parameters
text_iterator (generator of List[str]) — The training corpus. Should be a generator of batches of texts, for instance a list of lists of texts if you have everything in memory.
vocab_size (int) — The size of the vocabulary you want for your tokenizer.
length (int, optional) — The total number of sequences in the iterator. This is used to provide meaningful progress tracking.
new_special_tokens (list of str or AddedToken, optional) — A list of new special tokens to add to the tokenizer you are training.
special_tokens_map (Dict[str, str], optional) — If you want to rename some of the special tokens this tokenizer uses, pass along a mapping old special token name to new special token name in this argument.
kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to the trainer from the 🤗 Tokenizers library.
A new tokenizer of the same type as the original one, trained on text_iterator.
Trains a tokenizer on a new corpus with the same defaults (in terms of special tokens or tokenization pipeline) as the current one.
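Here is a hedged sketch of retraining a fast tokenizer on a new corpus; the toy in-memory corpus, vocab_size, and output directory are placeholders, and in practice you would stream batches from a dataset.
from transformers import AutoTokenizer
old_tokenizer = AutoTokenizer.from_pretrained("gpt2")
corpus = [
    "def add(a, b): return a + b",
    "for i in range(10): print(i)",
] * 100
def batch_iterator(batch_size=64):
    # Yield the corpus in batches, as expected by train_new_from_iterator.
    for i in range(0, len(corpus), batch_size):
        yield corpus[i : i + batch_size]
new_tokenizer = old_tokenizer.train_new_from_iterator(batch_iterator(), vocab_size=1000)
new_tokenizer.save_pretrained("my-new-tokenizer")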
BatchEncoding
class transformers.BatchEncoding
< source >
( data: typing.Union[typing.Dict[str, typing.Any], NoneType] = None encoding: typing.Union[tokenizers.Encoding, typing.Sequence[tokenizers.Encoding], NoneType] = None tensor_type: typing.Union[NoneType, str, transformers.utils.generic.TensorType] = None prepend_batch_axis: bool = False n_sequences: typing.Optional[int] = None )
Parameters
data (dict) — Dictionary of lists/arrays/tensors returned by the __call__/encode_plus/batch_encode_plus methods (‘input_ids’, ‘attention_mask’, etc.).
encoding (tokenizers.Encoding or Sequence[tokenizers.Encoding], optional) — If the tokenizer is a fast tokenizer which outputs additional information like mapping from word/character space to token space the tokenizers.Encoding instance or list of instance (for batches) hold this information.
tensor_type (Union[None, str, TensorType], optional) — You can give a tensor_type here to convert the lists of integers into PyTorch/TensorFlow/Numpy tensors at initialization.
prepend_batch_axis (bool, optional, defaults to False) — Whether or not to add a batch axis when converting to tensors (see tensor_type above).
n_sequences (Optional[int], optional) — The number of sequences that were encoded to produce this BatchEncoding (e.g. 1 for a single sequence, 2 for a pair of sequences).
Holds the output of the call(), encode_plus() and batch_encode_plus() methods (tokens, attention_masks, etc).
This class is derived from a python dictionary and can be used as a dictionary. In addition, this class exposes utility methods to map from word/character space to token space.
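A minimal sketch of these mapping utilities on the output of a fast tokenizer (the checkpoint name is only an example):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoding = tokenizer("Tokenizers are fun")
print(encoding.tokens())          # subword tokens, including special tokens
print(encoding.word_ids())        # word index per token, None for special tokens
print(encoding.char_to_token(0))  # index of the token covering character 0 of the input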
char_to_token
< source >
( batch_or_char_index: int char_index: typing.Optional[int] = None sequence_index: int = 0 ) → int
Parameters
batch_or_char_index (int) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the character in the sequence.
char_index (int, optional) — If a batch index is provided in batch_or_char_index, this can be the index of the character in the sequence.
sequence_index (int, optional, defaults to 0) — If pair of sequences are encoded in the batch this can be used to specify which sequence in the pair (0 or 1) the provided character index belongs to.
Index of the token.
Get the index of the token in the encoded output comprising a character in the original string for a sequence of the batch.
Can be called as:
self.char_to_token(char_index) if batch size is 1
self.char_to_token(batch_index, char_index) if batch size is greater or equal to 1
This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e. words are defined by the user). In this case it allows to easily associate encoded tokens with provided tokenized words.
char_to_word
< source >
( batch_or_char_index: int char_index: typing.Optional[int] = None sequence_index: int = 0 ) → int or List[int]
Parameters
batch_or_char_index (int) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the character in the original string.
char_index (int, optional) — If a batch index is provided in batch_or_char_index, this can be the index of the character in the original string.
sequence_index (int, optional, defaults to 0) — If pair of sequences are encoded in the batch this can be used to specify which sequence in the pair (0 or 1) the provided character index belongs to.
Index or indices of the associated word(s) in the original string.
Get the word in the original string corresponding to a character in the original string of a sequence of the batch.
Can be called as:
self.char_to_word(char_index) if batch size is 1
self.char_to_word(batch_index, char_index) if batch size is greater than 1
This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e. words are defined by the user). In this case it allows to easily associate encoded tokens with provided tokenized words.
convert_to_tensors
< source >
( tensor_type: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None prepend_batch_axis: bool = False )
Parameters
tensor_type (str or TensorType, optional) — The type of tensors to use. If str, should be one of the values of the enum TensorType. If None, no modification is done.
prepend_batch_axis (bool, optional, defaults to False) — Whether or not to add the batch dimension during the conversion.
Convert the inner content to tensors.
sequence_ids
< source >
( batch_index: int = 0 ) → List[Optional[int]]
Parameters
batch_index (int, optional, defaults to 0) — The index to access in the batch.
Returns
List[Optional[int]]
A list indicating the sequence id corresponding to each token. Special tokens added by the tokenizer are mapped to None and other tokens are mapped to the index of their corresponding sequence.
Return a list mapping the tokens to the id of their original sentences:
None for special tokens added around or between sequences,
0 for tokens corresponding to words in the first sequence,
1 for tokens corresponding to words in the second sequence when a pair of sequences was jointly encoded.
to
< source >
( device: typing.Union[str, ForwardRef('torch.device')] ) → BatchEncoding
Parameters
device (str or torch.device) — The device to put the tensors on.
The same instance after modification.
Send all values to device by calling v.to(device) (PyTorch only).
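A hedged sketch (PyTorch only); whether "cuda" is available depends on your machine, and the checkpoint name is only an example.
import torch
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
batch = tokenizer(["Hello world"], return_tensors="pt")
device = "cuda" if torch.cuda.is_available() else "cpu"
batch = batch.to(device)  # returns the same BatchEncoding with all tensors on `device`
print(batch["input_ids"].device)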
token_to_chars
< source >
( batch_or_token_index: int token_index: typing.Optional[int] = None ) → CharSpan
Parameters
batch_or_token_index (int) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the token in the sequence.
token_index (int, optional) — If a batch index is provided in batch_or_token_index, this can be the index of the token or tokens in the sequence.
Span of characters in the original string, or None if the token (e.g. a special token such as <s> or </s>) doesn’t correspond to any characters in the original string.
Get the character span corresponding to an encoded token in a sequence of the batch.
Character spans are returned as a CharSpan with:
start — Index of the first character in the original string associated to the token.
end — Index of the character following the last character in the original string associated to the token.
Can be called as:
self.token_to_chars(token_index) if batch size is 1
self.token_to_chars(batch_index, token_index) if batch size is greater or equal to 1
token_to_sequence
< source >
( batch_or_token_index: int token_index: typing.Optional[int] = None ) → int
Parameters
batch_or_token_index (int) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the token in the sequence.
token_index (int, optional) — If a batch index is provided in batch_or_token_index, this can be the index of the token in the sequence.
Index of the sequence in the input that the given token belongs to.
Get the index of the sequence represented by the given token. In the general use case, this method returns 0 for a single sequence or the first sequence of a pair, and 1 for the second sequence of a pair
Can be called as:
self.token_to_sequence(token_index) if batch size is 1
self.token_to_sequence(batch_index, token_index) if batch size is greater than 1
This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e., words are defined by the user). In this case it allows to easily associate encoded tokens with provided tokenized words.
token_to_word
< source >
( batch_or_token_index: int token_index: typing.Optional[int] = None ) → int
Parameters
batch_or_token_index (int) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the token in the sequence.
token_index (int, optional) — If a batch index is provided in batch_or_token_index, this can be the index of the token in the sequence.
Index of the word in the input sequence.
Get the index of the word corresponding to (i.e. comprising) an encoded token in a sequence of the batch.
Can be called as:
self.token_to_word(token_index) if batch size is 1
self.token_to_word(batch_index, token_index) if batch size is greater than 1
This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e., words are defined by the user). In this case it allows to easily associate encoded tokens with provided tokenized words.
tokens
< source >
( batch_index: int = 0 ) → List[str]
Parameters
batch_index (int, optional, defaults to 0) — The index to access in the batch.
The list of tokens at that index.
Return the list of tokens (sub-parts of the input strings after word/subword splitting and before conversion to integer indices) at a given batch index (only works for the output of a fast tokenizer).
word_ids
< source >
( batch_index: int = 0 ) → List[Optional[int]]
Parameters
batch_index (int, optional, defaults to 0) — The index to access in the batch.
Returns
List[Optional[int]]
A list indicating the word corresponding to each token. Special tokens added by the tokenizer are mapped to None and other tokens are mapped to the index of their corresponding word (several tokens will be mapped to the same word index if they are parts of that word).
Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.
word_to_chars
< source >
( batch_or_word_index: int word_index: typing.Optional[int] = None sequence_index: int = 0 ) → CharSpan or List[CharSpan]
Parameters
batch_or_word_index (int) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the word in the sequence.
word_index (int, optional) — If a batch index is provided in batch_or_word_index, this can be the index of the word in the sequence.
sequence_index (int, optional, defaults to 0) — If pair of sequences are encoded in the batch this can be used to specify which sequence in the pair (0 or 1) the provided word index belongs to.
Returns
CharSpan or List[CharSpan]
Span(s) of the associated character or characters in the string. CharSpan are NamedTuple with:
start: index of the first character associated to the token in the original string
end: index of the character following the last character associated to the token in the original string
Get the character span in the original string corresponding to given word in a sequence of the batch.
Character spans are returned as a CharSpan NamedTuple with:
start: index of the first character in the original string
end: index of the character following the last character in the original string
Can be called as:
self.word_to_chars(word_index) if batch size is 1
self.word_to_chars(batch_index, word_index) if batch size is greater or equal to 1
word_to_tokens
< source >
( batch_or_word_index: int word_index: typing.Optional[int] = None sequence_index: int = 0 ) → (TokenSpan, optional)
Parameters
batch_or_word_index (int) — Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of the word in the sequence.
word_index (int, optional) — If a batch index is provided in batch_or_word_index, this can be the index of the word in the sequence.
sequence_index (int, optional, defaults to 0) — If pair of sequences are encoded in the batch this can be used to specify which sequence in the pair (0 or 1) the provided word index belongs to.
Returns
(TokenSpan, optional)
Span of tokens in the encoded sequence. Returns None if no tokens correspond to the word. This can happen especially when the token is a special token that has been used to format the tokenization. For example when we add a class token at the very beginning of the tokenization.
Get the encoded token span corresponding to a word in a sequence of the batch.
Token spans are returned as a TokenSpan with:
start — Index of the first token.
end — Index of the token following the last token.
Can be called as:
self.word_to_tokens(word_index, sequence_index: int = 0) if batch size is 1
self.word_to_tokens(batch_index, word_index, sequence_index: int = 0) if batch size is greater or equal to 1
This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e. words are defined by the user). In this case it allows to easily associate encoded tokens with provided tokenized words.
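For instance, a sketch that finds which encoded tokens cover word 1 of a pre-tokenized input (the checkpoint name is only an example):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoding = tokenizer(["The", "quick", "brown", "fox"], is_split_into_words=True)
span = encoding.word_to_tokens(1)  # TokenSpan(start, end), or None for e.g. special tokens
if span is not None:
    print(encoding.tokens()[span.start : span.end])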
words
< source >
( batch_index: int = 0 ) → List[Optional[int]]
Parameters
batch_index (int, optional, defaults to 0) — The index to access in the batch.
Returns
List[Optional[int]]
A list indicating the word corresponding to each token. Special tokens added by the tokenizer are mapped to None and other tokens are mapped to the index of their corresponding word (several tokens will be mapped to the same word index if they are parts of that word).
Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer. |
DeepSpeed Integration
DeepSpeed implements everything described in the ZeRO paper. Currently it provides full support for:
Optimizer state partitioning (ZeRO stage 1)
Gradient partitioning (ZeRO stage 2)
Parameter partitioning (ZeRO stage 3)
Custom mixed precision training handling
A range of fast CUDA-extension-based optimizers
ZeRO-Offload to CPU and NVMe
ZeRO-Offload has its own dedicated paper: ZeRO-Offload: Democratizing Billion-Scale Model Training. And NVMe-support is described in the paper ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning.
DeepSpeed ZeRO-2 is primarily used only for training, since its features are not useful for inference.
DeepSpeed ZeRO-3 can be used for inference as well, since it allows huge models to be loaded on multiple GPUs, which won’t be possible on a single GPU.
🤗 Transformers integrates DeepSpeed via 2 options:
Integration of the core DeepSpeed features via Trainer. This is an everything-done-for-you type of integration - just supply your custom config file or use our template and you have nothing else to do. Most of this document is focused on this feature.
If you don't use Trainer and instead use your own trainer in which you integrated DeepSpeed yourself, core functions like from_pretrained and from_config include integration of essential parts of DeepSpeed like zero.Init for ZeRO stage 3 and higher. To tap into this feature, read the docs on non-Trainer DeepSpeed Integration.
What is integrated:
Training:
DeepSpeed ZeRO training supports the full ZeRO stages 1, 2 and 3 with ZeRO-Infinity (CPU and NVME offload).
Inference:
DeepSpeed ZeRO Inference supports ZeRO stage 3 with ZeRO-Infinity. It uses the same ZeRO protocol as training, but it doesn’t use an optimizer and a lr scheduler and only stage 3 is relevant. For more details see: zero-inference.
There is also DeepSpeed Inference - this is a totally different technology which uses Tensor Parallelism instead of ZeRO (coming soon).
Trainer Deepspeed Integration
Installation
Install the library via pypi:
pip install deepspeed
or via transformers’ extras:
pip install transformers[deepspeed]
or find more details on the DeepSpeed’s GitHub page and advanced install.
If you’re still struggling with the build, first make sure to read CUDA Extension Installation Notes.
If you don't prebuild the extensions and rely on them being built at runtime, and you have tried all of the above solutions to no avail, the next thing to try is to prebuild the modules before installing them.
To make a local build for DeepSpeed:
git clone https://github.com/microsoft/DeepSpeed/
cd DeepSpeed
rm -rf build
TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 pip install . \
--global-option="build_ext" --global-option="-j8" --no-cache -v \
--disable-pip-version-check 2>&1 | tee build.log
If you intend to use NVMe offload you will also need to include DS_BUILD_AIO=1 in the instructions above (and also install libaio-dev system-wide).
Edit TORCH_CUDA_ARCH_LIST to insert the code for the architectures of the GPU cards you intend to use. Assuming all your cards are the same you can get the arch via:
CUDA_VISIBLE_DEVICES=0 python -c "import torch; print(torch.cuda.get_device_capability())"
So if you get 8, 6, then use TORCH_CUDA_ARCH_LIST="8.6". If you have multiple different cards, you can list all of them, like so: TORCH_CUDA_ARCH_LIST="6.1;8.6".
If you need to use the same setup on multiple machines, make a binary wheel:
git clone https://github.com/microsoft/DeepSpeed/
cd DeepSpeed
rm -rf build
TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 \
python setup.py build_ext -j8 bdist_wheel
it will generate something like dist/deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl which now you can install as pip install deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl locally or on any other machine.
Again, remember to adjust TORCH_CUDA_ARCH_LIST to the target architectures.
You can find the complete list of NVIDIA GPUs and their corresponding Compute Capabilities (same as arch in this context) here.
You can check the archs pytorch was built with using:
python -c "import torch; print(torch.cuda.get_arch_list())"
Here is how to find out the arch for one of the installed GPUs. For example, for GPU 0:
CUDA_VISIBLE_DEVICES=0 python -c "import torch; \
print(torch.cuda.get_device_properties(torch.device('cuda')))"
If the output is:
_CudaDeviceProperties(name='GeForce RTX 3090', major=8, minor=6, total_memory=24268MB, multi_processor_count=82)
then you know that this card’s arch is 8.6.
You can also leave TORCH_CUDA_ARCH_LIST out completely and then the build program will automatically query the architecture of the GPUs the build is made on. This may or may not match the GPUs on the target machines, that’s why it’s best to specify the desired archs explicitly.
If after trying everything suggested you still encounter build issues, please open a GitHub Issue in the DeepSpeed repository.
Deployment with multiple GPUs
To deploy the DeepSpeed integration adjust the Trainer command line arguments to include a new argument --deepspeed ds_config.json, where ds_config.json is the DeepSpeed configuration file as documented here. The file naming is up to you. It’s recommended to use DeepSpeed’s add_config_arguments utility to add the necessary command line arguments to your code. For more information please see DeepSpeed’s Argument Parsing doc.
You can use a launcher of your choice here. You can continue using the pytorch launcher:
python -m torch.distributed.run --nproc_per_node=2 your_program.py <normal cl args> --deepspeed ds_config.json
or use the launcher provided by deepspeed:
deepspeed --num_gpus=2 your_program.py <normal cl args> --deepspeed ds_config.json
As you can see the arguments aren’t the same, but for most needs either of them works. The full details on how to configure various nodes and GPUs can be found here.
When you use the deepspeed launcher and you want to use all available gpus you can just omit the --num_gpus flag.
Here is an example of running run_translation.py under DeepSpeed deploying all available GPUs:
deepspeed examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero3.json \
--model_name_or_path t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir --fp16 \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro
Note that in the DeepSpeed documentation you are likely to see --deepspeed --deepspeed_config ds_config.json - i.e. two DeepSpeed-related arguments, but for the sake of simplicity, and since there are already so many arguments to deal with, we combined the two into a single argument.
For some practical usage examples, please, see this post.
Deployment with one GPU
To deploy DeepSpeed with one GPU adjust the Trainer command line arguments as follows:
deepspeed --num_gpus=1 examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero2.json \
--model_name_or_path t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir --fp16 \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro
This is almost the same as with multiple-GPUs, but here we tell DeepSpeed explicitly to use just one GPU via --num_gpus=1. By default, DeepSpeed deploys all GPUs it can see on the given node. If you have only 1 GPU to start with, then you don’t need this argument. The following documentation discusses the launcher options.
Why would you want to use DeepSpeed with just one GPU?
It has a ZeRO-offload feature which can delegate some computations and memory to the host's CPU and RAM, and thus leave more GPU resources for the model's needs - e.g. a larger batch size, or fitting a very big model which normally won't fit.
It provides a smart GPU memory management system, that minimizes memory fragmentation, which again allows you to fit bigger models and data batches.
While we are going to discuss the configuration in detail next, the key to getting a huge improvement on a single GPU with DeepSpeed is to have at least the following configuration in the configuration file:
{
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"overlap_comm": true,
"contiguous_gradients": true
}
}
which enables optimizer offload and some other important features. You may experiment with the buffer sizes, you will find more details in the discussion below.
For a practical usage example of this type of deployment, please, see this post.
You may also try the ZeRO-3 with CPU and NVMe offload as explained further in this document.
Notes:
if you need to run on a specific GPU, which is different from GPU 0, you can’t use CUDA_VISIBLE_DEVICES to limit the visible scope of available GPUs. Instead, you have to use the following syntax:
deepspeed --include localhost:1 examples/pytorch/translation/run_translation.py ...
In this example, we tell DeepSpeed to use GPU 1 (second gpu).
Deployment with multiple Nodes
The information in this section isn't specific to the DeepSpeed integration and is applicable to any multi-node program. But DeepSpeed provides a deepspeed launcher that is easier to use than other launchers unless you are in a SLURM environment.
For the duration of this section let's assume that you have 2 nodes with 8 gpus each. You can reach the first node with ssh hostname1 and the second node with ssh hostname2, and both must be able to reach each other via ssh locally without a password. Of course, you will need to replace these host (node) names with the actual host names you are working with.
The torch.distributed.run launcher
For example, to use torch.distributed.run, you could do:
python -m torch.distributed.run --nproc_per_node=8 --nnode=2 --node_rank=0 --master_addr=hostname1 \
--master_port=9901 your_program.py <normal cl args> --deepspeed ds_config.json
You have to ssh to each node and run this same command on each one of them! There is no rush, the launcher will wait until both nodes synchronize.
For more information please see torchrun. Incidentally, this is also the launcher that replaced torch.distributed.launch a few pytorch versions back.
The deepspeed launcher
To use the deepspeed launcher instead, you have to first create a hostfile file:
hostname1 slots=8
hostname2 slots=8
and then you can launch it as:
deepspeed --num_gpus 8 --num_nodes 2 --hostfile hostfile --master_addr hostname1 --master_port=9901 \
your_program.py <normal cl args> --deepspeed ds_config.json
Unlike the torch.distributed.run launcher, deepspeed will automatically launch this command on both nodes!
For more information please see Resource Configuration (multi-node).
Launching in a SLURM environment
In the SLURM environment the following approach can be used. The following is a slurm script launch.slurm which you will need to adapt to your specific SLURM environment.
export GPUS_PER_NODE=8
export MASTER_ADDR=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1)
export MASTER_PORT=9901
srun --jobid $SLURM_JOBID bash -c 'python -m torch.distributed.run \
 --nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES --node_rank $SLURM_PROCID \
 --master_addr $MASTER_ADDR --master_port $MASTER_PORT \
 your_program.py <normal cl args> --deepspeed ds_config.json'
All that is left is to schedule it to run:
sbatch launch.slurm
srun will take care of launching the program simultaneously on all nodes.
Use of Non-shared filesystem
By default DeepSpeed expects that a multi-node environment uses a shared storage. If this is not the case and each node can only see the local filesystem, you need to adjust the config file to include a checkpoint section with the following setting:
{
"checkpoint": {
"use_node_local_storage": true
}
}
Alternatively, you can also use the Trainer’s --save_on_each_node argument, and the above config will be added automatically for you.
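As a hedged sketch, here is the programmatic equivalent when configuring the Trainer from Python; the output directory and config file name are placeholders.
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="output_dir",
    deepspeed="ds_config.json",
    save_on_each_node=True,  # injects the node-local checkpoint setting shown above for you
)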
Deployment in Notebooks
The problem with running notebook cells as a script is that there is no normal deepspeed launcher to rely on, so under certain setups we have to emulate it.
If you’re using only 1 GPU, here is how you’d have to adjust your training code in the notebook to use DeepSpeed.
import os
from transformers import Trainer, TrainingArguments
# DeepSpeed requires a distributed environment even when only one process is used.
# This emulates what the launcher would normally set up for you.
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "9994"  # modify if this port is already in use
os.environ["RANK"] = "0"
os.environ["LOCAL_RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"
# Now proceed as you normally would, plus pass the DeepSpeed config file.
training_args = TrainingArguments(..., deepspeed="ds_config_zero3.json")
trainer = Trainer(...)
trainer.train()
Note: ... stands for the normal arguments that you’d pass to the functions.
If you want to use more than 1 GPU, you must use a multi-process environment for DeepSpeed to work. That is, you have to use the launcher for that purpose and this cannot be accomplished by emulating the distributed environment presented at the beginning of this section.
If you want to create the config file on the fly in the notebook in the current directory, you could have a dedicated cell with:
%%bash
cat <<'EOT' > ds_config_zero3.json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
EOT
If the training script is in a normal file and not in the notebook cells, you can launch deepspeed normally via shell from a cell. For example, to use run_translation.py you would launch it with:
!git clone https://github.com/huggingface/transformers
!cd transformers; deepspeed examples/pytorch/translation/run_translation.py ...
or with %%bash magic, where you can write a multi-line code for the shell program to run:
%%bash
git clone https://github.com/huggingface/transformers
cd transformers
deepspeed examples/pytorch/translation/run_translation.py ...
In such case you don’t need any of the code presented at the beginning of this section.
Note: While %%bash magic is neat, it currently buffers the output, so you won't see the logs until the process completes.
Configuration
For the complete guide to the DeepSpeed configuration options that can be used in its configuration file please refer to the following documentation.
You can find dozens of DeepSpeed configuration examples that address various practical needs in the DeepSpeedExamples repo:
git clone https://github.com/microsoft/DeepSpeedExamples
cd DeepSpeedExamples
find . -name '*json'
Continuing the code from above, let’s say you’re looking to configure the Lamb optimizer. So you can search through the example .json files with:
grep -i Lamb $(find . -name '*json')
Some more examples are to be found in the main repo as well.
When using DeepSpeed you always need to supply a DeepSpeed configuration file, yet some configuration parameters have to be configured via the command line. You will find the nuances in the rest of this guide.
To get an idea of what the DeepSpeed configuration file looks like, here is one that activates ZeRO stage 2 features, including optimizer state CPU offload, uses the AdamW optimizer and the WarmupLR scheduler, and will enable mixed precision training if --fp16 is passed:
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
}
When you execute the program, DeepSpeed will log the configuration it received from the Trainer to the console, so you can see exactly what the final configuration passed to it was.
Passing Configuration
As discussed in this document normally the DeepSpeed configuration is passed as a path to a json file, but if you’re not using the command line interface to configure the training, and instead instantiate the Trainer via TrainingArguments then for the deepspeed argument you can pass a nested dict. This allows you to create the configuration on the fly and doesn’t require you to write it to the file system before passing it to TrainingArguments.
To summarize you can do:
TrainingArguments(..., deepspeed="/path/to/ds_config.json")
or:
ds_config_dict = dict(scheduler=scheduler_params, optimizer=optimizer_params)
TrainingArguments(..., deepspeed=ds_config_dict)
Shared Configuration
This section is a must-read
Some configuration values are required by both the Trainer and DeepSpeed to function correctly, therefore, to prevent conflicting definitions, which could lead to hard to detect errors, we chose to configure those via the Trainer command line arguments.
Additionally, some configuration values are derived automatically based on the model's configuration, so instead of remembering to manually adjust multiple values, it's best to let the Trainer do the majority of configuration for you.
Therefore, in the rest of this guide you will find a special configuration value: auto, which when set will be automatically replaced with the correct or most efficient value. Please feel free to choose to ignore this recommendation and set the values explicitly, in which case be very careful that your the Trainer arguments and DeepSpeed configurations agree. For example, are you using the same learning rate, or batch size, or gradient accumulation settings? if these mismatch the training may fail in very difficult to detect ways. You have been warned.
There are multiple other values that are specific to DeepSpeed-only and those you will have to set manually to suit your needs.
In your own programs, you can also use the following approach if you’d like to modify the DeepSpeed config as a master and configure TrainingArguments based on that. The steps are:
Create or load the DeepSpeed configuration to be used as a master configuration
Create the TrainingArguments object based on these values
Do note that some values, such as scheduler.params.total_num_steps are calculated by Trainer during train, but you can of course do the math yourself.
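Here is a minimal sketch of that flow. It assumes the master config ds_config.json contains explicit values (not "auto") under hypothetical keys, and that output_dir is only a placeholder:
import json

from transformers import TrainingArguments

# 1. load the DeepSpeed config that acts as the master configuration
with open("ds_config.json") as f:
    ds_config = json.load(f)

# 2. read the shared hyperparameters out of it (here just the optimizer lr)
lr = ds_config["optimizer"]["params"]["lr"]

# 3. build TrainingArguments from those values and pass the same dict back in
training_args = TrainingArguments(
    output_dir="output_dir",
    learning_rate=lr,
    deepspeed=ds_config,
)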
ZeRO
Zero Redundancy Optimizer (ZeRO) is the workhorse of DeepSpeed. It supports 3 different levels (stages) of optimization. The first one is not very interesting for scalability purposes, therefore this document focuses on stages 2 and 3. Stage 3 is further improved by the latest addition of ZeRO-Infinity. You will find more in-depth information in the DeepSpeed documentation.
The zero_optimization section of the configuration file is the most important part (docs), since that is where you define which ZeRO stages you want to enable and how to configure them. You will find the explanation for each parameter in the DeepSpeed docs.
This section has to be configured exclusively via DeepSpeed configuration - the Trainer provides no equivalent command line arguments.
Note: currently DeepSpeed doesn’t validate parameter names, so if you misspell any, it’ll use the default setting for the parameter that got misspelled. You can watch the DeepSpeed engine start up log messages to see what values it is going to use.
ZeRO-2 Config
The following is an example of configuration for ZeRO stage 2:
{
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"contiguous_gradients": true
}
}
Performance tuning:
enabling offload_optimizer should reduce GPU RAM usage (it requires "stage": 2)
"overlap_comm": true trades off increased GPU RAM usage to lower all-reduce latency. overlap_comm uses 4.5x the allgather_bucket_size and reduce_bucket_size values. So if they are set to 5e8, this requires a 9GB footprint (5e8 x 2Bytes x 2 x 4.5). Therefore, if you have a GPU with 8GB or less RAM, to avoid getting OOM-errors you will need to reduce those parameters to about 2e8, which would require 3.6GB. You will want to do the same on larger capacity GPU as well, if you’re starting to hit OOM.
when reducing these buffers you’re trading communication speed to avail more GPU RAM. The smaller the buffer size is, the slower the communication gets, and the more GPU RAM will be available to other tasks. So if a bigger batch size is important, getting a slightly slower training time could be a good trade.
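As a rough sketch of that arithmetic (the 2-byte fp16 elements, the two buffers and the 4.5x multiplier all come from the explanation above, not from measurements):
def overlap_comm_footprint_gb(bucket_size, bytes_per_elem=2):
    # the same value is used for allgather_bucket_size and reduce_bucket_size,
    # hence the factor of 2; overlap_comm needs ~4.5x each buffer
    return bucket_size * bytes_per_elem * 2 * 4.5 / 1e9

print(overlap_comm_footprint_gb(5e8))  # 9.0  -> ~9GB footprint
print(overlap_comm_footprint_gb(2e8))  # 3.6  -> ~3.6GB footprint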
Additionally, deepspeed==0.4.4 added a new option round_robin_gradients which you can enable with:
{
"zero_optimization": {
"round_robin_gradients": true
}
}
This is a stage 2 optimization for CPU offloading that parallelizes gradient copying to CPU memory among ranks by fine-grained gradient partitioning. Performance benefit grows with gradient accumulation steps (more copying between optimizer steps) or GPU count (increased parallelism).
ZeRO-3 Config
The following is an example of configuration for ZeRO stage 3:
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
}
}
If you are getting OOMs because your model or activations don't fit into the GPU memory and you have unutilized CPU memory, offloading the optimizer states and parameters to CPU memory with "device": "cpu" may solve this limitation. If you don't want to offload to CPU memory, use none instead of cpu for the device entry. Offloading to NVMe is discussed further down.
Pinned memory is enabled with pin_memory set to true. This feature can improve the throughput at the cost of making less memory available to other processes. Pinned memory is set aside for the specific process that requested it, and it's typically accessed much faster than normal CPU memory.
Performance tuning:
stage3_max_live_parameters: 1e9
stage3_max_reuse_distance: 1e9
If you hit OOM, reduce stage3_max_live_parameters and stage3_max_reuse_distance. They should have minimal impact on performance unless you are doing activation checkpointing. 1e9 would consume ~2GB. The memory is shared by stage3_max_live_parameters and stage3_max_reuse_distance, so it's not additive - it's just 2GB total.
stage3_max_live_parameters is the upper limit on how many full parameters you want to keep on the GPU at any given time. "Reuse distance" is a metric we use to figure out when a parameter will be used again in the future, and we use stage3_max_reuse_distance to decide whether to throw the parameter away or keep it. If a parameter is going to be used again in the near future (in less than stage3_max_reuse_distance), we keep it to reduce communication overhead. This is especially helpful when you have activation checkpointing enabled, where we do the forward recompute and backward pass at single-layer granularity and want to keep the parameter on the GPU from the forward recompute until the backward pass.
The following configuration values depend on the model’s hidden size:
reduce_bucket_size: hidden_size*hidden_size
stage3_prefetch_bucket_size: 0.9 * hidden_size * hidden_size
stage3_param_persistence_threshold: 10 * hidden_size
therefore set these values to auto and the Trainer will automatically assign the recommended values. But, of course, feel free to set these explicitly as well.
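If you do want to set them explicitly, here is a sketch of how those recommended values could be derived from the model config (t5-small is only a placeholder; not every config exposes hidden_size directly, hence the fallback to d_model):
from transformers import AutoConfig

config = AutoConfig.from_pretrained("t5-small")
hidden_size = getattr(config, "hidden_size", None) or config.d_model

zero3_auto_values = {
    "reduce_bucket_size": hidden_size * hidden_size,
    "stage3_prefetch_bucket_size": int(0.9 * hidden_size * hidden_size),
    "stage3_param_persistence_threshold": 10 * hidden_size,
}
print(zero3_auto_values)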
stage3_gather_16bit_weights_on_model_save enables model fp16 weights consolidation when the model gets saved. With large models and multiple GPUs this is an expensive operation both in terms of memory and speed. It's currently required if you plan to resume the training. Watch out for future updates that will remove this limitation and make things more flexible.
If you’re migrating from ZeRO-2 configuration note that allgather_partitions, allgather_bucket_size and reduce_scatter configuration parameters are not used in ZeRO-3. If you keep these in the config file they will just be ignored.
sub_group_size: 1e9
sub_group_size controls the granularity in which parameters are updated during optimizer steps. Parameters are grouped into buckets of sub_group_size and each bucket is updated one at a time. When used with NVMe offload in ZeRO-Infinity, sub_group_size therefore controls the granularity in which model states are moved in and out of CPU memory from NVMe during the optimizer step. This prevents running out of CPU memory for extremely large models.
You can leave sub_group_size to its default value of 1e9 when not using NVMe offload. You may want to change its default value in the following cases:
Running into OOM during optimizer step: Reduce sub_group_size to reduce memory utilization of temporary buffers
Optimizer Step is taking a long time: Increase sub_group_size to improve bandwidth utilization as a result of the increased data buffers.
ZeRO-0 Config
Note that we’re listing Stage 0 and 1 last since they are rarely used.
Stage 0 disables all types of sharding and just uses DeepSpeed as DDP. You can turn it on with:
{
"zero_optimization": {
"stage": 0
}
}
This will essentially disable ZeRO without you needing to change anything else.
ZeRO-1 Config
Stage 1 is Stage 2 minus gradient sharding. You can always try it to shard only the optimizer states, which may speed things up a tiny bit:
{
"zero_optimization": {
"stage": 1
}
}
NVMe Support
ZeRO-Infinity allows for training incredibly large models by extending GPU and CPU memory with NVMe memory. Thanks to smart partitioning and tiling algorithms, each GPU needs to send and receive only very small amounts of data during offloading, so a modern NVMe drive proves fast enough to make an even larger total memory pool available to your training process. ZeRO-Infinity requires ZeRO-3 to be enabled.
The following configuration example enables NVMe to offload both optimizer states and the params:
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "nvme",
"nvme_path": "/local_nvme",
"pin_memory": true,
"buffer_count": 4,
"fast_init": false
},
"offload_param": {
"device": "nvme",
"nvme_path": "/local_nvme",
"pin_memory": true,
"buffer_count": 5,
"buffer_size": 1e8,
"max_in_cpu": 1e9
},
"aio": {
"block_size": 262144,
"queue_depth": 32,
"thread_count": 1,
"single_submit": false,
"overlap_events": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
}
}
You can choose to offload both optimizer states and params to NVMe, or just one of them or none. For example, if you have copious amounts of CPU memory available, by all means offload to CPU memory only as it’d be faster (hint: “device”: “cpu”).
Here is the full documentation for offloading optimizer states and parameters.
Make sure that your nvme_path is actually an NVMe drive - it will work with a normal hard drive or SSD, but it'll be much, much slower. The fast scalable training was designed with modern NVMe transfer speeds in mind (as of this writing one can have ~3.5GB/s read, ~3GB/s write peak speeds).
In order to figure out the optimal aio configuration block you must run a benchmark on your target setup, as explained here.
ZeRO-2 vs ZeRO-3 Performance
ZeRO-3 is likely to be slower than ZeRO-2 if everything else is configured the same because the former has to gather model weights in addition to what ZeRO-2 does. If ZeRO-2 meets your needs and you don’t need to scale beyond a few GPUs then you may choose to stick to it. It’s important to understand that ZeRO-3 enables a much higher scalability capacity at a cost of speed.
It’s possible to adjust ZeRO-3 configuration to make it perform closer to ZeRO-2:
set stage3_param_persistence_threshold to a very large number - larger than the largest parameter, e.g., 6 * hidden_size * hidden_size. This will keep the parameters on the GPUs.
turn off offload_params since ZeRO-2 doesn’t have that option.
The performance will likely improve significantly with just offload_params turned off, even if you don’t change stage3_param_persistence_threshold. Of course, these changes will impact the size of the model you can train. So these help you to trade scalability for speed depending on your needs.
ZeRO-2 Example
Here is a full ZeRO-2 auto-configuration file ds_config_zero2.json:
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
Here is a full ZeRO-2 all-enabled manually set configuration file. It is here mainly for you to see what the typical values look like, but we highly recommend using the one with multiple auto settings in it.
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 3e-5,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"steps_per_print": 2000,
"wall_clock_breakdown": false
}
ZeRO-3 Example
Here is a full ZeRO-3 auto-configuration file ds_config_zero3.json:
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
Here is a full ZeRO-3 all-enabled manually set configuration file. It is here mainly for you to see what the typical values look like, but we highly recommend using the one with multiple auto settings in it.
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 3e-5,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": 1e6,
"stage3_prefetch_bucket_size": 0.94e6,
"stage3_param_persistence_threshold": 1e4,
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"steps_per_print": 2000,
"wall_clock_breakdown": false
}
How to Choose Which ZeRO Stage and Offloads To Use For Best Performance
So now you know there are all these different stages. How to decide which of them to use? This section will attempt to address this question.
In general the following applies:
Speed-wise (left is faster than right)
Stage 0 (DDP) > Stage 1 > Stage 2 > Stage 2 + offload > Stage 3 > Stage 3 + offloads
GPU Memory usage-wise (right is more GPU memory efficient than left)
Stage 0 (DDP) < Stage 1 < Stage 2 < Stage 2 + offload < Stage 3 < Stage 3 + offloads
So when you want to get the fastest execution while fitting into a minimal number of GPUs, here is the process you could follow. We start with the fastest approach, and if we run into GPU OOM we move to the next slower approach that uses less GPU memory, and so on.
First of all set batch size to 1 (you can always use gradient accumulation for any desired effective batch size).
Enable --gradient_checkpointing 1 (HF Trainer) or directly model.gradient_checkpointing_enable() - if OOM then
Try ZeRO stage 2 first - if OOM then
Try ZeRO stage 2 + offload_optimizer - if OOM then
Switch to ZeRO stage 3 - if OOM then
Enable offload_param to cpu - if OOM then
Enable offload_optimizer to cpu - if OOM then
If you still can't fit a batch size of 1, first check various default values and lower them if you can. For example, if you use generate and you don't need a wide search beam, make it narrower, as it takes a lot of memory.
Definitely use mixed half-precision over fp32 - so bf16 on Ampere and higher GPUs and fp16 on older GPU architectures.
If you still OOM you could add more hardware or enable ZeRO-Infinity - that is, switch the offload_param and offload_optimizer offloads to nvme. You need to make sure it's a very fast NVMe. As an anecdote, I was able to run inference with BLOOM-176B on a tiny GPU using ZeRO-Infinity, except it was extremely slow. But it worked!
You can, of course, work through these steps in reverse by starting with the most GPU memory efficient config and then going backwards. Or try bisecting it.
Once you have your batch size 1 not leading to OOM, measure your effective throughput.
Next try to increase the batch size to as large as you can, since the higher the batch size the more efficient the GPUs are as they perform the best when matrices they multiply are huge.
Now the performance optimization game starts. You can turn off some offload features or step down in ZeRO stages and increase/decrease batch size and again measure your effective throughput. Rinse and repeat until satisfied.
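One hedged way to measure the effective throughput with the Trainer is to read the metrics returned by train(); a minimal sketch, assuming trainer is your already-configured Trainer instance:
def report_throughput(trainer):
    # trainer is an already-configured transformers.Trainer instance
    train_result = trainer.train()
    metrics = train_result.metrics
    print(f"runtime: {metrics['train_runtime']:.0f}s")
    print(f"samples/sec: {metrics['train_samples_per_second']:.2f}")
    print(f"steps/sec: {metrics['train_steps_per_second']:.2f}")
    return metrics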
Don't spend forever on it, but if you're about to start a 3-month training run, do spend a few days on it to find the most effective setup throughput-wise. That way your training cost will be the lowest and you will finish training faster. In the current crazy-paced ML world, if it takes you an extra month to train something you are likely to miss a golden opportunity. Of course, this is only me sharing an observation and in no way am I trying to rush you. Before beginning to train BLOOM-176B I spent 2 days on this process and was able to increase throughput from 90 to 150 TFLOPs! This effort saved us more than one month of training time.
These notes were written primarily for the training mode, but they should mostly apply for inference as well. For example, during inference gradient checkpointing is a no-op since it is only useful during training. Additionally, we found out that if you are doing multi-GPU inference and not using DeepSpeed-Inference, Accelerate should provide superior performance.
Other quick related performance notes:
if you are training something from scratch always try to have tensors with shapes that are divisible by 16 (e.g. the hidden size). For batch size, try to make it divisible by at least 2. There are hardware-specific wave and tile quantization divisibility rules if you want to squeeze even higher performance from your GPUs.
Activation Checkpointing or Gradient Checkpointing
Activation checkpointing and gradient checkpointing are two distinct terms that refer to the same methodology. It’s very confusing but this is how it is.
Gradient checkpointing allows one to trade speed for GPU memory, which either allows one to overcome a GPU OOM, or increase their batch size, which often leads to a better performance.
HF Transformers models don’t know anything about DeepSpeed’s activation checkpointing, so if you try to enable that feature in the DeepSpeed config file, nothing will happen.
Therefore you have two ways to take advantage of this very beneficial feature:
If you want to use an HF Transformers model you can do model.gradient_checkpointing_enable() or use --gradient_checkpointing in the HF Trainer, which will automatically enable this for you. torch.utils.checkpoint is used there.
If you write your own model and you want to use DeepSpeed’s activation checkpointing you can use the API prescribed there. You can also take the HF Transformers modeling code and replace torch.utils.checkpoint with the DeepSpeed’s API. The latter is more flexible since it allows you to offload the forward activations to the CPU memory instead of recalculating them.
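A minimal sketch of the first option (the model name and output_dir are only placeholders):
from transformers import AutoModelForSeq2SeqLM, TrainingArguments

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# either enable it directly on the model ...
model.gradient_checkpointing_enable()

# ... or let the Trainer do it for you via the corresponding argument
args = TrainingArguments(output_dir="output_dir", gradient_checkpointing=True)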
Optimizer and Scheduler
As long as you don’t enable offload_optimizer you can mix and match DeepSpeed and HuggingFace schedulers and optimizers, with the exception of using the combination of HuggingFace scheduler and DeepSpeed optimizer:
| Combos       | HF Scheduler | DS Scheduler |
|--------------|--------------|--------------|
| HF Optimizer | Yes          | Yes          |
| DS Optimizer | No           | Yes          |
It is possible to use a non-DeepSpeed optimizer when offload_optimizer is enabled, as long as it has both CPU and GPU implementation (except LAMB).
Optimizer
DeepSpeed's main optimizers are Adam, AdamW, OneBitAdam, and Lamb. These have been thoroughly tested with ZeRO and are thus recommended. DeepSpeed can, however, also import other optimizers from torch. The full documentation is here.
If you don’t configure the optimizer entry in the configuration file, the Trainer will automatically set it to AdamW and will use the supplied values or the defaults for the following command line arguments: --learning_rate, --adam_beta1, --adam_beta2, --adam_epsilon and --weight_decay.
Here is an example of the auto-configured optimizer entry for AdamW:
{
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
}
}
Note that the command line arguments will set the values in the configuration file. This is so that there is one definitive source of the values and to avoid hard-to-find errors when, for example, the learning rate is set to different values in different places. Command line rules. The values that get overridden are:
lr with the value of --learning_rate
betas with the values of --adam_beta1 and --adam_beta2
eps with the value of --adam_epsilon
weight_decay with the value of --weight_decay
Therefore please remember to tune the shared hyperparameters on the command line.
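If you are scripting the Trainer yourself rather than launching from the command line, the same shared hyperparameters go through TrainingArguments; a hedged sketch (the numeric values and paths are placeholders):
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output_dir",
    learning_rate=3e-5,        # fills "lr": "auto"
    adam_beta1=0.9,            # fills the first entry of "betas": "auto"
    adam_beta2=0.999,          # fills the second entry of "betas": "auto"
    adam_epsilon=1e-8,         # fills "eps": "auto"
    weight_decay=0.01,         # fills "weight_decay": "auto"
    deepspeed="ds_config.json",
)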
You can also set the values explicitly:
{
"optimizer": {
"type": "AdamW",
"params": {
"lr": 0.001,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
}
}
But then you’re on your own synchronizing the Trainer command line arguments and the DeepSpeed configuration.
If you want to use another optimizer which is not listed above, you will have to add the following to the top-level configuration:
{
"zero_allow_untested_optimizer": true
}
Similarly to AdamW, you can configure other officially supported optimizers. Just remember that those may have different config values. e.g. for Adam you will want weight_decay around 0.01.
Additionally, offload works best when it's used with DeepSpeed's CPU Adam optimizer. If you want to use a different optimizer with offload, since deepspeed==0.8.3 you need to also add:
{
"zero_force_ds_cpu_optimizer": false
}
to the top level configuration.
Scheduler
DeepSpeed supports LRRangeTest, OneCycle, WarmupLR and WarmupDecayLR learning rate schedulers. The full documentation is here.
Here is where the schedulers overlap between 🤗 Transformers and DeepSpeed:
WarmupLR via --lr_scheduler_type constant_with_warmup
WarmupDecayLR via --lr_scheduler_type linear. This is also the default value for --lr_scheduler_type, therefore, if you don't configure the scheduler, this is the scheduler that will get configured by default.
If you don’t configure the scheduler entry in the configuration file, the Trainer will use the values of --lr_scheduler_type, --learning_rate and --warmup_steps or --warmup_ratio to configure a 🤗 Transformers version of it.
Here is an example of the auto-configured scheduler entry for WarmupLR:
{
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
}
}
Since “auto” is used the Trainer arguments will set the correct values in the configuration file. This is so that there is one definitive source of the values and to avoid hard to find errors when, for example, the learning rate is set to different values in different places. Command line rules. The values that get set are:
warmup_min_lr with the value of 0.
warmup_max_lr with the value of --learning_rate.
warmup_num_steps with the value of --warmup_steps if provided. Otherwise will use --warmup_ratio multiplied by the number of training steps and rounded up.
total_num_steps with either the value of --max_steps or if it is not provided, derived automatically at run time based on the environment and the size of the dataset and other command line arguments (needed for WarmupDecayLR).
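For illustration, here is roughly how those auto values can be reproduced by hand; the argument values and total_steps below are placeholders:
import math

learning_rate = 3e-5    # --learning_rate
warmup_steps = 0        # --warmup_steps
warmup_ratio = 0.1      # --warmup_ratio
total_steps = 10_000    # --max_steps, or derived from the dataset size at run time

warmup_min_lr = 0
warmup_max_lr = learning_rate
warmup_num_steps = warmup_steps if warmup_steps > 0 else math.ceil(warmup_ratio * total_steps)
total_num_steps = total_steps  # needed for WarmupDecayLR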
You can, of course, take over any or all of the configuration values and set those yourself:
{
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 0.001,
"warmup_num_steps": 1000
}
}
}
But then you’re on your own synchronizing the Trainer command line arguments and the DeepSpeed configuration.
For example, for WarmupDecayLR, you can use the following entry:
{
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"last_batch_iteration": -1,
"total_num_steps": "auto",
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
}
}
and total_num_steps, warmup_min_lr, warmup_max_lr and warmup_num_steps will be set at loading time.
fp32 Precision
Deepspeed supports the full fp32 and the fp16 mixed precision.
Because of the much reduced memory needs and faster speed one gets with the fp16 mixed precision, the only time you will want to not use it is when the model you’re using doesn’t behave well under this training mode. Typically this happens when the model wasn’t pretrained in the fp16 mixed precision (e.g. often this happens with bf16-pretrained models). Such models may overflow or underflow leading to NaN loss. If this is your case then you will want to use the full fp32 mode, by explicitly disabling the otherwise default fp16 mixed precision mode with:
{
"fp16": {
"enabled": false,
}
}
If you're using an Ampere-architecture GPU, PyTorch version 1.7 and higher will automatically switch to using the much more efficient tf32 format for some operations, but the results will still be in fp32. For details and benchmarks, please see TensorFloat-32(TF32) on Ampere devices. The document includes instructions on how to disable this automatic conversion if for some reason you prefer not to use it.
With the 🤗 Trainer you can use --tf32 to enable it, or disable it with --tf32 0 or --no_tf32. By default the PyTorch default is used.
Automatic Mixed Precision
You can use automatic mixed precision with either a pytorch-like AMP way or the apex-like way:
fp16
To configure pytorch AMP-like mode with fp16 (float16) set:
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
}
}
and the Trainer will automatically enable or disable it based on the value of args.fp16_backend. The rest of config values are up to you.
This mode gets enabled when --fp16 --fp16_backend amp or --fp16_full_eval command line args are passed.
You can also enable/disable this mode explicitly:
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
}
}
But then you’re on your own synchronizing the Trainer command line arguments and the DeepSpeed configuration.
Here is the documentation.
bf16
If bf16 (bfloat16) is desired instead of fp16 then the following configuration section is to be used:
{
"bf16": {
"enabled": "auto"
}
}
bf16 has the same dynamic range as fp32 and thus doesn’t require loss scaling.
This mode gets enabled when --bf16 or --bf16_full_eval command line args are passed.
You can also enable/disable this mode explicitly:
{
"bf16": {
"enabled": true
}
}
As of deepspeed==0.6.0 the bf16 support is new and experimental.
If you use gradient accumulation with bf16 enabled, you need to be aware that it'll accumulate gradients in bf16, which may not be what you want due to this format's low precision, as it may lead to a lossy accumulation.
Work is being done to fix that and provide an option to use a higher-precision dtype (fp16 or fp32).
NCCL Collectives
There is the dtype of the training regime and there is a separate dtype that is used for communication collectives like various reduction and gathering/scattering operations.
All gather/scatter ops are performed in the same dtype the data is in, so if you’re using bf16 training regime it gets gathered in bf16 - gathering is a non-lossy operation.
Various reduce operations can be quite lossy, for example when gradients are averaged across multiple GPUs. If the communications are done in fp16 or bf16 the outcome is likely to be lossy, since when one adds multiple numbers in low precision the result isn't exact. More so with bf16, as it has a lower precision than fp16. Often fp16 is good enough, as the loss is minimal when averaging gradients, which are typically very small. Therefore, by default, fp16 is used as the dtype for reduction operations in half-precision training. But you have full control over this functionality, and if you choose you can add a small overhead and ensure that reductions use fp32 as the accumulation dtype, downcasting to the half-precision dtype you're training in only when the result is ready.
In order to override the default you simply add a new configuration entry:
{
"communication_data_type": "fp32"
}
The valid values as of this writing are “fp16”, “bfp16”, “fp32”.
Note: ZeRO stage 3 had a bug with regards to the bf16 comm dtype that was fixed in deepspeed==0.8.1.
apex
To configure apex AMP-like mode set:
"amp": {
"enabled": "auto",
"opt_level": "auto"
}
and the Trainer will automatically configure it based on the values of args.fp16_backend and args.fp16_opt_level.
This mode gets enabled when the --fp16 --fp16_backend apex --fp16_opt_level O1 command line args are passed.
You can also configure this mode explicitly:
{
"amp": {
"enabled": true,
"opt_level": "O1"
}
}
But then you’re on your own synchronizing the Trainer command line arguments and the DeepSpeed configuration.
Here is the documentation.
Batch Size
To configure batch size, use:
{
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto"
}
and the Trainer will automatically set train_micro_batch_size_per_gpu to the value of args.per_device_train_batch_size and train_batch_size to args.world_size * args.per_device_train_batch_size * args.gradient_accumulation_steps.
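As a quick worked example of that formula (all numbers below are hypothetical):
world_size = 8                     # number of GPUs
per_device_train_batch_size = 4
gradient_accumulation_steps = 2

train_micro_batch_size_per_gpu = per_device_train_batch_size                               # 4
train_batch_size = world_size * per_device_train_batch_size * gradient_accumulation_steps  # 64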
You can also set the values explicitly:
{
"train_batch_size": 12,
"train_micro_batch_size_per_gpu": 4
}
But then you’re on your own synchronizing the Trainer command line arguments and the DeepSpeed configuration.
Gradient Accumulation
To configure gradient accumulation set:
{
"gradient_accumulation_steps": "auto"
}
and the Trainer will automatically set it to the value of args.gradient_accumulation_steps.
You can also set the value explicitly:
{
"gradient_accumulation_steps": 3
}
But then you’re on your own synchronizing the Trainer command line arguments and the DeepSpeed configuration.
Gradient Clipping
To configure gradient clipping set:
{
"gradient_clipping": "auto"
}
and the Trainer will automatically set it to the value of args.max_grad_norm.
You can also set the value explicitly:
{
"gradient_clipping": 1.0
}
But then you’re on your own synchronizing the Trainer command line arguments and the DeepSpeed configuration.
Getting The Model Weights Out
As long as you continue training and resuming using DeepSpeed you don't need to worry about anything. DeepSpeed stores fp32 master weights in its custom checkpoint optimizer files, which are global_step*/*optim_states.pt (this is a glob pattern), and are saved under the normal checkpoint.
FP16 Weights:
When a model is saved under ZeRO-2, you end up having the normal pytorch_model.bin file with the model weights, but they are only the fp16 version of the weights.
Under ZeRO-3, things are much more complicated, since the model weights are partitioned out over multiple GPUs, therefore "stage3_gather_16bit_weights_on_model_save": true is required to get the Trainer to save the fp16 version of the weights. If this setting is false, pytorch_model.bin won't be created. This is because by default DeepSpeed's state_dict contains a placeholder and not the real weights. If we were to save this state_dict it wouldn't be possible to load it back.
{
"zero_optimization": {
"stage3_gather_16bit_weights_on_model_save": true
}
}
FP32 Weights:
While the fp16 weights are fine for resuming training, if you finished finetuning your model and want to upload it to the models hub or pass it to someone else you most likely will want to get the fp32 weights. This ideally shouldn’t be done during training since this is a process that requires a lot of memory, and therefore best to be performed offline after the training is complete. But if desired and you have plenty of free CPU memory it can be done in the same training script. The following sections will discuss both approaches.
Live FP32 Weights Recovery:
This approach may not work if your model is large and you have little free CPU memory left at the end of the training.
If you have saved at least one checkpoint, and you want to use the latest one, you can do the following:
from transformers.trainer_utils import get_last_checkpoint
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
checkpoint_dir = get_last_checkpoint(trainer.args.output_dir)
fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
If you're using the --load_best_model_at_end TrainingArguments argument (to track the best checkpoint), then you can finish the training by first saving the final model explicitly and then doing the same as above:
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
checkpoint_dir = os.path.join(trainer.args.output_dir, "checkpoint-final")
trainer.deepspeed.save_checkpoint(checkpoint_dir)
fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
Note that once load_state_dict_from_zero_checkpoint has been run, the model will no longer be usable in the DeepSpeed context of the same application, i.e. you will need to re-initialize the DeepSpeed engine, since model.load_state_dict(state_dict) will remove all the DeepSpeed magic from it. So do this only at the very end of the training.
Of course, you don't have to use Trainer and you can adjust the examples above to your own trainer.
If for some reason you want more refinement, you can also extract the fp32 state_dict of the weights and apply these yourself as is shown in the following example:
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir)
model = model.cpu()
model.load_state_dict(state_dict)
Offline FP32 Weights Recovery:
DeepSpeed creates a special conversion script zero_to_fp32.py which it places in the top-level of the checkpoint folder. Using this script you can extract the weights at any point. The script is standalone and you no longer need to have the configuration file or a Trainer to do the extraction.
Let’s say your checkpoint folder looks like this:
$ ls -l output_dir/checkpoint-1/
-rw-rw-r-- 1 stas stas 1.4K Mar 27 20:42 config.json
drwxrwxr-x 2 stas stas 4.0K Mar 25 19:52 global_step1/
-rw-rw-r-- 1 stas stas 12 Mar 27 13:16 latest
-rw-rw-r-- 1 stas stas 827K Mar 27 20:42 optimizer.pt
-rw-rw-r-- 1 stas stas 231M Mar 27 20:42 pytorch_model.bin
-rw-rw-r-- 1 stas stas 623 Mar 27 20:42 scheduler.pt
-rw-rw-r-- 1 stas stas 1.8K Mar 27 20:42 special_tokens_map.json
-rw-rw-r-- 1 stas stas 774K Mar 27 20:42 spiece.model
-rw-rw-r-- 1 stas stas 1.9K Mar 27 20:42 tokenizer_config.json
-rw-rw-r-- 1 stas stas 339 Mar 27 20:42 trainer_state.json
-rw-rw-r-- 1 stas stas 2.3K Mar 27 20:42 training_args.bin
-rwxrw-r-- 1 stas stas 5.5K Mar 27 13:16 zero_to_fp32.py*
In this example there is just one DeepSpeed checkpoint sub-folder global_step1. Therefore to reconstruct the fp32 weights just run:
python zero_to_fp32.py . pytorch_model.bin
This is it. pytorch_model.bin will now contain the full fp32 model weights consolidated from multiple GPUs.
The script will automatically be able to handle either a ZeRO-2 or ZeRO-3 checkpoint.
python zero_to_fp32.py -h will give you usage details.
The script will auto-discover the deepspeed sub-folder using the contents of the file latest, which in the current example will contain global_step1.
Note: currently the script requires 2x the general RAM of the final fp32 model weights.
ZeRO-3 and Infinity Nuances
ZeRO-3 is quite different from ZeRO-2 because of its param sharding feature.
ZeRO-Infinity further extends ZeRO-3 to support NVMe memory and multiple other speed and scalability improvements.
While every effort was made for things to just work without needing any special changes to your models, in certain circumstances you may find the following information necessary.
Constructing Massive Models
DeepSpeed/ZeRO-3 can handle models with trillions of parameters which may not fit into the existing RAM. In such cases, but also if you want the initialization to happen much faster, initialize the model using the deepspeed.zero.Init() context manager (which is also a function decorator), like so:
from transformers import T5ForConditionalGeneration, T5Config
import deepspeed
with deepspeed.zero.Init():
config = T5Config.from_pretrained("t5-small")
model = T5ForConditionalGeneration(config)
As you can see this gives you a randomly initialized model.
If you want to use a pretrained model, model_class.from_pretrained will activate this feature as long as is_deepspeed_zero3_enabled() returns True, which currently is set up by the TrainingArguments object if the passed DeepSpeed configuration file contains a ZeRO-3 config section. Thus you must create the TrainingArguments object before calling from_pretrained. Here is an example of a possible sequence:
from transformers import AutoModel, Trainer, TrainingArguments
training_args = TrainingArguments(..., deepspeed=ds_config)
model = AutoModel.from_pretrained("t5-small")
trainer = Trainer(model=model, args=training_args, ...)
If you’re using the official example scripts and your command line arguments include --deepspeed ds_config.json with ZeRO-3 config enabled, then everything is already done for you, since this is how example scripts are written.
Note: If the fp16 weights of the model can’t fit onto the memory of a single GPU this feature must be used.
For full details on this method and other related features please refer to Constructing Massive Models.
Also when loading fp16-pretrained models, you will want to tell from_pretrained to use torch_dtype=torch.float16. For details, please, see from_pretrained-torch-dtype.
Gathering Parameters
Under ZeRO-3 on multiple GPUs no single GPU has all the parameters unless it’s the parameters for the currently executing layer. So if you need to access all parameters from all layers at once there is a specific method to do it. Most likely you won’t need it, but if you do please refer to Gathering Parameters
We do however use it internally in several places, one such example is when loading pretrained model weights in from_pretrained. We load one layer at a time and immediately partition it to all participating GPUs, as for very large models it won’t be possible to load it on one GPU and then spread it out to multiple GPUs, due to memory limitations.
Also under ZeRO-3, if you write your own code and run into a model parameter weight that looks like:
tensor([1.0], device="cuda:0", dtype=torch.float16, requires_grad=True)
stress on tensor([1.]), or if you get an error where it says the parameter is of size 1, instead of some much larger multi-dimensional shape, this means that the parameter is partitioned and what you see is a ZeRO-3 placeholder.
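If you need to look at the full weight of such a partitioned parameter in your own code, here is a hedged sketch using DeepSpeed's gathering context manager; model and the chosen parameter are placeholders from your own ZeRO-3 setup, and the Gathering Parameters docs remain the authoritative reference:
import deepspeed

param = model.lm_head.weight  # placeholder: any parameter of interest

with deepspeed.zero.GatheredParameters(param):
    # inside the context the full, un-partitioned tensor is available
    print(param.shape)
# outside the context only the size-1 ZeRO-3 placeholder remains on this rank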
ZeRO Inference
ZeRO Inference uses the same config as ZeRO-3 Training. You just don’t need the optimizer and scheduler sections. In fact you can leave these in the config file if you want to share the same one with the training. They will just be ignored.
Otherwise you just need to pass the usual TrainingArguments arguments. For example:
deepspeed --num_gpus=2 your_program.py <normal cl args> --do_eval --deepspeed ds_config.json
The only important thing is that you need to use a ZeRO-3 configuration, since ZeRO-2 provides no benefit whatsoever for inference: only ZeRO-3 shards the parameters, whereas ZeRO-1 shards the optimizer states and ZeRO-2 additionally shards the gradients, neither of which is needed at inference time.
Here is an example of running run_translation.py under DeepSpeed deploying all available GPUs:
deepspeed examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero3.json \
--model_name_or_path t5-small --output_dir output_dir \
--do_eval --max_eval_samples 50 --warmup_steps 50 \
--max_source_length 128 --val_max_target_length 128 \
--overwrite_output_dir --per_device_eval_batch_size 4 \
--predict_with_generate --dataset_config "ro-en" --fp16 \
--source_lang en --target_lang ro --dataset_name wmt16 \
--source_prefix "translate English to Romanian: "
Since for inference there is no need for additional large memory used by the optimizer states and the gradients you should be able to fit much larger batches and/or sequence length onto the same hardware.
Additionally DeepSpeed is currently developing a related product called Deepspeed-Inference which has no relationship to the ZeRO technology, but instead uses tensor parallelism to scale models that can’t fit onto a single GPU. This is a work in progress and we will provide the integration once that product is complete.
Memory Requirements
Since Deepspeed ZeRO can offload memory to CPU (and NVMe) the framework provides utils that allow one to tell how much CPU and GPU memory will be needed depending on the number of GPUs being used.
Let’s estimate how much memory is needed to finetune “bigscience/T0_3B” on a single GPU:
$ python -c 'from transformers import AutoModel; \
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \
model = AutoModel.from_pretrained("bigscience/T0_3B"); \
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=1, num_nodes=1)'
[...]
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 1 GPU per node.
SW: Model with 2783M total params, 65M largest layer params.
per CPU | per GPU | Options
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0
62.23GB | 5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=1
62.23GB | 5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=0
0.37GB | 46.91GB | offload_param=none, offload_optimizer=none, zero_init=1
15.56GB | 46.91GB | offload_param=none, offload_optimizer=none, zero_init=0
So you can fit it on a single 80GB GPU and no CPU offload, or a tiny 8GB GPU but then need ~60GB of CPU memory. (Remember this is just the memory for params, optimizer states and gradients - you will need a bit more memory for cuda kernels, activations and temps.)
Then it's a tradeoff of cost vs speed. It'll be cheaper to buy/rent a smaller GPU (or fewer GPUs, since you can use multiple GPUs with DeepSpeed ZeRO). But then it'll be slower, so even if you don't care about how fast something gets done, the slowdown has a direct impact on the duration of using the GPU and thus a bigger cost. So experiment and compare which works best.
If you have enough GPU memory make sure to disable the CPU/NVMe offload as it’ll make everything faster.
For example, let’s repeat the same for 2 GPUs:
$ python -c 'from transformers import AutoModel; \
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \
model = AutoModel.from_pretrained("bigscience/T0_3B"); \
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=2, num_nodes=1)'
[...]
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 2 GPUs per node.
SW: Model with 2783M total params, 65M largest layer params.
per CPU | per GPU | Options
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0
62.23GB | 2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=1
62.23GB | 2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=0
0.74GB | 23.58GB | offload_param=none, offload_optimizer=none, zero_init=1
31.11GB | 23.58GB | offload_param=none, offload_optimizer=none, zero_init=0
So here you’d want 2x 32GB GPUs or higher without offloading to CPU.
For full information please see memory estimators.
Filing Issues
Here is how to file an issue so that we can quickly get to the bottom of the problem and help you unblock your work.
In your report please always include:
the full Deepspeed config file in the report
either the command line arguments if you were using the Trainer or TrainingArguments arguments if you were scripting the Trainer setup yourself. Please do not dump the TrainingArguments as it has dozens of entries that are irrelevant.
Output of:
python -c 'import torch; print(f"torch: {torch.__version__}")'
python -c 'import transformers; print(f"transformers: {transformers.__version__}")'
python -c 'import deepspeed; print(f"deepspeed: {deepspeed.__version__}")'
If possible include a link to a Google Colab notebook that we can reproduce the problem with. You can use this notebook as a starting point.
Unless it’s impossible please always use a standard dataset that we can use and not something custom.
If possible try to use one of the existing examples to reproduce the problem with.
Things to consider:
Deepspeed is often not the cause of the problem.
Some of the filed issues proved to be Deepspeed-unrelated. That is once Deepspeed was removed from the setup, the problem was still there.
Therefore, if it's not absolutely obvious that it's a DeepSpeed-related problem, as in you can see that there is an exception and that DeepSpeed modules are involved, first re-test your setup without DeepSpeed in it. Only if the problem persists should you then mention DeepSpeed and supply all the required details.
If it's clear to you that the issue is in the DeepSpeed core and not the integration part, please file the issue directly with DeepSpeed. If you aren't sure, please do not worry - either issue tracker will do; we will figure it out once you have posted it and redirect you to the other issue tracker if need be.
Troubleshooting
the deepspeed process gets killed at startup without a traceback
If the deepspeed process gets killed at launch time without a traceback, that usually means that the program tried to allocate more CPU memory than your system has or your process is allowed to allocate and the OS kernel killed that process. This is because your configuration file most likely has either offload_optimizer or offload_param or both configured to offload to cpu. If you have NVMe, experiment with offloading to NVMe if you’re running under ZeRO-3. Here is how you can estimate how much memory is needed for a specific model.
training and/or eval/predict loss is NaN
This often happens when one takes a model pre-trained in bf16 mixed precision mode and tries to use it under fp16 (with or without mixed precision). Most models trained on TPU and often the ones released by Google are in this category (e.g. almost all t5-based models). Here the solution is to either use fp32 or bf16 if your hardware supports it (TPU, Ampere GPUs or newer).
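For example, on hardware that supports bf16, a hedged way to avoid the fp16 NaN issue with such a model is to train in bf16 instead (together with the bf16 config section shown earlier; the paths below are placeholders):
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output_dir",
    bf16=True,                          # instead of --fp16; or neither flag, for full fp32
    deepspeed="ds_config_zero3.json",
)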
The other problem may have to do with using fp16. When you configure this section:
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
}
}
and you see in your log that Deepspeed reports OVERFLOW! as follows:
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 262144
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 131072.0
[...]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
[...]
that means that the DeepSpeed loss scaler can't figure out a scaling coefficient that overcomes loss overflow.
(the log was massaged to be more readable here.)
In this case you usually need to raise the value of initial_scale_power. Setting it to "initial_scale_power": 32 will typically resolve the problem.
Notes
DeepSpeed works with the PyTorch Trainer but not TF TFTrainer.
While DeepSpeed has a pip installable PyPI package, it is highly recommended that it gets installed from source to best match your hardware and also if you need to enable certain features, like 1-bit Adam, which aren’t available in the pypi distribution.
You don’t have to use the Trainer to use DeepSpeed with 🤗 Transformers - you can use any model with your own trainer, and you will have to adapt the latter according to the DeepSpeed integration instructions.
Non-Trainer Deepspeed Integration
The HfDeepSpeedConfig is used to integrate DeepSpeed into the 🤗 Transformers core functionality when the Trainer is not used. The only thing it does is handle DeepSpeed ZeRO-3 param gathering and automatically split the model onto multiple GPUs during the from_pretrained call. Everything else you have to do by yourself.
When using Trainer everything is automatically taken care of.
When not using Trainer, to efficiently deploy DeepSpeed ZeRO-3, you must instantiate the HfDeepSpeedConfig object before instantiating the model and keep that object alive.
If you’re using Deepspeed ZeRO-1 or ZeRO-2 you don’t need to use HfDeepSpeedConfig at all.
For example for a pretrained model:
from transformers.integrations import HfDeepSpeedConfig
from transformers import AutoModel
import deepspeed
ds_config = {...}
dschf = HfDeepSpeedConfig(ds_config)
model = AutoModel.from_pretrained("gpt2")
engine = deepspeed.initialize(model=model, config_params=ds_config, ...)
or for non-pretrained model:
from transformers.integrations import HfDeepSpeedConfig
from transformers import AutoModel, AutoConfig
import deepspeed
ds_config = {...}
dschf = HfDeepSpeedConfig(ds_config)
config = AutoConfig.from_pretrained("gpt2")
model = AutoModel.from_config(config)
engine = deepspeed.initialize(model=model, config_params=ds_config, ...)
Please note that if you're not using the Trainer integration, you're completely on your own. Basically follow the documentation on the DeepSpeed website. Also, you have to configure the config file explicitly - you can't use "auto" values, you will have to put real values instead.
HfDeepSpeedConfig
class transformers.integrations.HfDeepSpeedConfig
( config_file_or_dict )
Parameters
config_file_or_dict (Union[str, Dict]) — path to DeepSpeed config file or dict.
This object contains a DeepSpeed configuration dictionary and can be quickly queried for things like zero stage.
A weakref of this object is stored in the module’s globals to be able to access the config from areas where things like the Trainer object is not available (e.g. from_pretrained and _get_resized_embeddings). Therefore it’s important that this object remains alive while the program is still running.
Trainer uses the HfTrainerDeepSpeedConfig subclass instead. That subclass has logic to sync the configuration with values of TrainingArguments by replacing special placeholder values: "auto". Without this special logic the DeepSpeed configuration is not modified in any way.
Custom DeepSpeed ZeRO Inference
Here is an example of how one could do DeepSpeed ZeRO Inference without using the Trainer when the model can't fit onto a single GPU. The solution involves using additional GPUs and/or offloading GPU memory to CPU memory.
The important nuance to understand here is that the way ZeRO is designed you can process different inputs on different GPUs in parallel.
The example has copious notes and is self-documenting.
Make sure to:
disable CPU offload if you have enough GPU memory (since it slows things down)
enable bf16 if you own an Ampere or a newer GPU to make things faster. If you don’t have that hardware you may enable fp16 as long as you don’t use any model that was pre-trained in bf16 mixed precision (such as most t5 models). These usually overflow in fp16 and you will see garbage as output.
from transformers import AutoTokenizer, AutoConfig, AutoModelForSeq2SeqLM
from transformers.integrations import HfDeepSpeedConfig
import deepspeed
import os
import torch
os.environ["TOKENIZERS_PARALLELISM"] = "false"
local_rank = int(os.getenv("LOCAL_RANK", "0"))
world_size = int(os.getenv("WORLD_SIZE", "1"))
torch.cuda.set_device(local_rank)
deepspeed.init_distributed()
model_name = "bigscience/T0_3B"
config = AutoConfig.from_pretrained(model_name)
model_hidden_size = config.d_model
train_batch_size = 1 * world_size
ds_config = {
"fp16": {
"enabled": False
},
"bf16": {
"enabled": False
},
"zero_optimization": {
"stage": 3,
"offload_param": {
"device": "cpu",
"pin_memory": True
},
"overlap_comm": True,
"contiguous_gradients": True,
"reduce_bucket_size": model_hidden_size * model_hidden_size,
"stage3_prefetch_bucket_size": 0.9 * model_hidden_size * model_hidden_size,
"stage3_param_persistence_threshold": 10 * model_hidden_size
},
"steps_per_print": 2000,
"train_batch_size": train_batch_size,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": False
}
dschf = HfDeepSpeedConfig(ds_config)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
ds_engine = deepspeed.initialize(model=model, config_params=ds_config)[0]
ds_engine.module.eval()
rank = torch.distributed.get_rank()
if rank == 0:
text_in = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
elif rank == 1:
text_in = "Is this review positive or negative? Review: this is the worst restaurant ever"
tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer.encode(text_in, return_tensors="pt").to(device=local_rank)
with torch.no_grad():
outputs = ds_engine.module.generate(inputs, synced_gpus=True)
text_out = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"rank{rank}:\n in={text_in}\n out={text_out}")
Let’s save it as t0.py and run it:
$ deepspeed --num_gpus 2 t0.py
rank0:
in=Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy
out=Positive
rank1:
in=Is this review positive or negative? Review: this is the worst restaurant ever
out=negative
This was a very basic example and you will want to adapt it to your needs.
generate nuances
When using multiple GPUs with ZeRO Stage-3, one has to synchronize the GPUs by calling generate(..., synced_gpus=True). If this is not done and one GPU finishes generating before the others, the whole system will hang, as the remaining GPUs will not be able to receive the shard of weights from the GPU that stopped generating.
Starting from transformers>=4.28, if synced_gpus isn't explicitly specified, it'll be set to True automatically if these conditions are detected. But you can still override the value of synced_gpus if you need to.
Testing Deepspeed Integration
If you submit a PR that involves the DeepSpeed integration, please note that our CircleCI PR CI setup has no GPUs, so the tests requiring GPUs are only run on a different, nightly CI. Therefore a green CI report in your PR doesn't mean the DeepSpeed tests pass.
To run DeepSpeed tests, please run at least:
RUN_SLOW=1 pytest tests/deepspeed/test_deepspeed.py
If you changed any of the modeling or pytorch examples code, then run the model zoo tests as well. The following will run all DeepSpeed tests:
RUN_SLOW=1 pytest tests/deepspeed
Main DeepSpeed Resources
Project’s github
Usage docs
API docs
Blog posts
Papers:
ZeRO: Memory Optimizations Toward Training Trillion Parameter Models
ZeRO-Offload: Democratizing Billion-Scale Model Training
ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning
Finally, please remember that the Hugging Face Trainer only integrates DeepSpeed; if you have any problems or questions about DeepSpeed usage, please file an issue with the DeepSpeed GitHub. |
https://huggingface.co/thegovind | 12
Govind K
thegovind
thegovind
Research interests
None yet
Organizations
spaces 4
⛓️
LangFlow
Build error
🐠
Segment Anything
Stopped
📊
Reddog Pill Training
Build error
💻
Reddog Sandbox
models 2
thegovind/reddogpillmodel512
Text-to-Image • Updated Dec 12, 2022 • 533 • 1
thegovind/pills1testmodel
Text-to-Image • Updated Dec 9, 2022 • 427
datasets 1
thegovind/llamav2-instruct-miyagi
Viewer • Updated Jul 27 • 3 |
https://huggingface.co/docs/transformers/internal/tokenization_utils | Most of those are only useful if you are studying the code of the tokenizers in the library.
class transformers.PreTrainedTokenizerBase
< source >
( **kwargs )
Parameters
model_max_length (int, optional) — The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with from_pretrained(), this will be set to the value stored for the associated model in max_model_input_sizes (see above). If no value is provided, will default to VERY_LARGE_INTEGER (int(1e30)).
padding_side (str, optional) — The side on which the model should have padding applied. Should be selected between [‘right’, ‘left’]. Default value is picked from the class attribute of the same name.
truncation_side (str, optional) — The side on which the model should have truncation applied. Should be selected between [‘right’, ‘left’]. Default value is picked from the class attribute of the same name.
chat_template (str, optional) — A Jinja template string that will be used to format lists of chat messages. See https://huggingface.co/docs/transformers/chat_templating for a full description.
model_input_names (List[string], optional) — The list of inputs accepted by the forward pass of the model (like "token_type_ids" or "attention_mask"). Default value is picked from the class attribute of the same name.
bos_token (str or tokenizers.AddedToken, optional) — A special token representing the beginning of a sentence. Will be associated to self.bos_token and self.bos_token_id.
eos_token (str or tokenizers.AddedToken, optional) — A special token representing the end of a sentence. Will be associated to self.eos_token and self.eos_token_id.
unk_token (str or tokenizers.AddedToken, optional) — A special token representing an out-of-vocabulary token. Will be associated to self.unk_token and self.unk_token_id.
sep_token (str or tokenizers.AddedToken, optional) — A special token separating two different sentences in the same input (used by BERT for instance). Will be associated to self.sep_token and self.sep_token_id.
pad_token (str or tokenizers.AddedToken, optional) — A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by attention mechanisms or loss computation. Will be associated to self.pad_token and self.pad_token_id.
cls_token (str or tokenizers.AddedToken, optional) — A special token representing the class of the input (used by BERT for instance). Will be associated to self.cls_token and self.cls_token_id.
mask_token (str or tokenizers.AddedToken, optional) — A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). Will be associated to self.mask_token and self.mask_token_id.
additional_special_tokens (tuple or list of str or tokenizers.AddedToken, optional) — A tuple or a list of additional special tokens. Add them here to ensure they are skipped when decoding with skip_special_tokens is set to True. If they are not part of the vocabulary, they will be added at the end of the vocabulary.
clean_up_tokenization_spaces (bool, optional, defaults to True) — Whether or not the model should cleanup the spaces that were added when splitting the input text during the tokenization process.
split_special_tokens (bool, optional, defaults to False) — Whether or not the special tokens should be split during the tokenization process. The default behavior is to not split special tokens. This means that if <s> is the bos_token, then tokenizer.tokenize("<s>") = ['<s>']. Otherwise, if split_special_tokens=True, then tokenizer.tokenize("<s>") will give ['<', 's', '>']. This argument is only supported for slow tokenizers for the moment.
Base class for PreTrainedTokenizer and PreTrainedTokenizerFast.
Handles shared (mostly boilerplate) methods for those two classes.
Class attributes (overridden by derived classes)
vocab_files_names (Dict[str, str]) — A dictionary with, as keys, the __init__ keyword name of each vocabulary file required by the model, and as associated values, the filename for saving the associated file (string).
pretrained_vocab_files_map (Dict[str, Dict[str, str]]) — A dictionary of dictionaries, with the high-level keys being the __init__ keyword name of each vocabulary file required by the model, the low-level being the short-cut-names of the pretrained models with, as associated values, the url to the associated pretrained vocabulary file.
max_model_input_sizes (Dict[str, Optional[int]]) — A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, the maximum length of the sequence inputs of this model, or None if the model has no maximum input size.
pretrained_init_configuration (Dict[str, Dict[str, Any]]) — A dictionary with, as keys, the short-cut-names of the pretrained models, and as associated values, a dictionary of specific arguments to pass to the __init__ method of the tokenizer class for this pretrained model when loading the tokenizer with the from_pretrained() method.
model_input_names (List[str]) — A list of inputs expected in the forward pass of the model.
padding_side (str) — The default value for the side on which the model should have padding applied. Should be 'right' or 'left'.
truncation_side (str) — The default value for the side on which the model should have truncation applied. Should be 'right' or 'left'.
__call__
< source >
( text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None text_pair: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None text_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None text_pair_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None max_length: typing.Optional[int] = None stride: int = 0 is_split_into_words: bool = False pad_to_multiple_of: typing.Optional[int] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None return_token_type_ids: typing.Optional[bool] = None return_attention_mask: typing.Optional[bool] = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True **kwargs ) → BatchEncoding
Parameters
text (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_pair (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_target (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_pair_target (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
add_special_tokens (bool, optional, defaults to True) — Whether or not to add special tokens when encoding the sequences. This will use the underlying PretrainedTokenizerBase.build_inputs_with_special_tokens function, which defines which tokens are automatically added to the input ids. This is useful if you want to add bos or eos tokens automatically.
padding (bool, str or PaddingStrategy, optional, defaults to False) — Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) — Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) — Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) — If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) — Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.
What are token type IDs?
return_attention_mask (bool, optional) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
return_overflowing_tokens (bool, optional, defaults to False) — Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) — Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) — Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast; if using Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) — Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) — Whether or not to print more information and warnings. **kwargs — passed to the self.tokenize() method
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model.
What are input IDs?
token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True or if “token_type_ids” is in self.model_input_names).
What are token type IDs?
attention_mask — List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names).
What are attention masks?
overflowing_tokens — List of overflowing tokens sequences (when a max_length is specified and return_overflowing_tokens=True).
num_truncated_tokens — Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).
special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length — The length of the inputs (when return_length=True)
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.
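For illustration, a minimal sketch of the usual call pattern (assuming the bert-base-uncased checkpoint and PyTorch are available):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Single sequence, returned as plain Python lists
encoded = tokenizer("Hello, world!")
print(encoded["input_ids"])
# Batch of sequence pairs, padded, truncated and returned as PyTorch tensors
batch = tokenizer(
    ["How are you?", "What time is it?"],
    ["I am fine.", "It is noon."],
    padding=True,
    truncation=True,
    return_tensors="pt",
)
print(batch["input_ids"].shape)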
apply_chat_template
< source >
( conversation: typing.Union[typing.List[typing.Dict[str, str]], ForwardRef('Conversation')] chat_template: typing.Optional[str] = None tokenize: bool = True padding: bool = False truncation: bool = False max_length: typing.Optional[int] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None **tokenizer_kwargs ) → List[int]
Parameters
conversation (Union[List[Dict[str, str]], “Conversation”]) — A Conversation object or list of dicts with “role” and “content” keys, representing the chat history so far.
chat_template (str, optional) — A Jinja template to use for this conversion. If this is not passed, the model’s default chat template will be used instead.
tokenize (bool, defaults to True) — Whether to tokenize the output. If False, the output will be a string.
padding (bool, defaults to False) — Whether to pad sequences to the maximum length. Has no effect if tokenize is False.
truncation (bool, defaults to False) — Whether to truncate sequences at the maximum length. Has no effect if tokenize is False.
max_length (int, optional) — Maximum length (in tokens) to use for padding or truncation. Has no effect if tokenize is False. If not specified, the tokenizer’s max_length attribute will be used as a default.
return_tensors (str or TensorType, optional) — If set, will return tensors of a particular framework. Has no effect if tokenize is False. Acceptable values are:
'tf': Return TensorFlow tf.Tensor objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return NumPy np.ndarray objects.
'jax': Return JAX jnp.ndarray objects. **tokenizer_kwargs — Additional kwargs to pass to the tokenizer.
A list of token ids representing the tokenized chat so far, including control tokens. This output is ready to pass to the model, either directly or via methods like generate().
Converts a Conversation object or a list of dictionaries with "role" and "content" keys to a list of token ids. This method is intended for use with chat models, and will read the tokenizer’s chat_template attribute to determine the format and control tokens to use when converting. When chat_template is None, it will fall back to the default_chat_template specified at the class level.
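As a sketch of typical usage (the checkpoint name is only an example of a model that ships a chat_template):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
chat = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "Doing great, how can I help?"},
]
# Render the chat with the model's template and tokenize it in one step
input_ids = tokenizer.apply_chat_template(chat, tokenize=True, return_tensors="pt")
# Or only render the formatted prompt as a string
prompt = tokenizer.apply_chat_template(chat, tokenize=False)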
as_target_tokenizer
Temporarily sets the tokenizer for encoding the targets. Useful for tokenizers associated with sequence-to-sequence models that need slightly different processing for the labels.
batch_decode
< source >
( sequences: typing.Union[typing.List[int], typing.List[typing.List[int]], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')] skip_special_tokens: bool = False clean_up_tokenization_spaces: bool = None **kwargs ) → List[str]
Parameters
sequences (Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]) — List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) — Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional) — Whether or not to clean up the tokenization spaces. If None, will default to self.clean_up_tokenization_spaces.
kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.
The list of decoded sentences.
Convert a list of lists of token ids into a list of strings by calling decode.
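A minimal sketch, assuming the bert-base-uncased checkpoint:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["Hello world", "Goodbye world"], padding=True)
texts = tokenizer.batch_decode(batch["input_ids"], skip_special_tokens=True)
print(texts)  # e.g. ['hello world', 'goodbye world'] with an uncased vocabulary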
batch_encode_plus
< source >
( batch_text_or_text_pairs: typing.Union[typing.List[str], typing.List[typing.Tuple[str, str]], typing.List[typing.List[str]], typing.List[typing.Tuple[typing.List[str], typing.List[str]]], typing.List[typing.List[int]], typing.List[typing.Tuple[typing.List[int], typing.List[int]]]] add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None max_length: typing.Optional[int] = None stride: int = 0 is_split_into_words: bool = False pad_to_multiple_of: typing.Optional[int] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None return_token_type_ids: typing.Optional[bool] = None return_attention_mask: typing.Optional[bool] = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True **kwargs ) → BatchEncoding
Parameters
batch_text_or_text_pairs (List[str], List[Tuple[str, str]], List[List[str]], List[Tuple[List[str], List[str]]], and for not-fast tokenizers, also List[List[int]], List[Tuple[List[int], List[int]]]) — Batch of sequences or pair of sequences to be encoded. This can be a list of string/string-sequences/int-sequences or a list of pair of string/string-sequences/int-sequence (see details in encode_plus).
add_special_tokens (bool, optional, defaults to True) — Whether or not to add special tokens when encoding the sequences. This will use the underlying PretrainedTokenizerBase.build_inputs_with_special_tokens function, which defines which tokens are automatically added to the input ids. This is useful if you want to add bos or eos tokens automatically.
padding (bool, str or PaddingStrategy, optional, defaults to False) — Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) — Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) — Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) — If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) — Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.
What are token type IDs?
return_attention_mask (bool, optional) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
return_overflowing_tokens (bool, optional, defaults to False) — Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) — Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) — Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast; if using Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) — Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) — Whether or not to print more information and warnings. **kwargs — passed to the self.tokenize() method
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model.
What are input IDs?
token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True or if “token_type_ids” is in self.model_input_names).
What are token type IDs?
attention_mask — List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names).
What are attention masks?
overflowing_tokens — List of overflowing tokens sequences (when a max_length is specified and return_overflowing_tokens=True).
num_truncated_tokens — Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).
special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length — The length of the inputs (when return_length=True)
Tokenize and prepare for the model a list of sequences or a list of pairs of sequences.
This method is deprecated, __call__ should be used instead.
build_inputs_with_special_tokens
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — The first tokenized sequence.
token_ids_1 (List[int], optional) — The second tokenized sequence.
The model input with special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens.
This implementation does not add special tokens and this method should be overridden in a subclass.
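For illustration, a derived class such as BertTokenizer does override it and inserts [CLS]/[SEP] (a small sketch assuming the bert-base-uncased checkpoint):
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("goodbye"))
with_special = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
# Roughly: [CLS] hello world [SEP] goodbye [SEP]
print(tokenizer.convert_ids_to_tokens(with_special))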
clean_up_tokenization
< source >
( out_string: str ) → str
Parameters
out_string (str) — The text to clean up.
The cleaned-up string.
Clean up a list of simple English tokenization artifacts, like spaces before punctuation and abbreviated forms.
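A quick sketch of the kind of clean-up performed (the exact replacements may vary across versions):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.clean_up_tokenization("Hello , world ! Do n't worry ."))
# expected roughly: "Hello, world! Don't worry."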
convert_tokens_to_string
< source >
( tokens: typing.List[str] ) → str
Parameters
tokens (List[str]) — The tokens to join into a string.
The joined tokens.
Converts a sequence of tokens into a single string. The simplest way to do it is " ".join(tokens), but we often want to remove sub-word tokenization artifacts at the same time.
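A minimal sketch with a WordPiece tokenizer (assuming the bert-base-uncased checkpoint):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokens = tokenizer.tokenize("tokenization")  # e.g. ['token', '##ization']
print(tokenizer.convert_tokens_to_string(tokens))  # 'tokenization', with sub-word markers merged away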
create_token_type_ids_from_sequences
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — The first tokenized sequence.
token_ids_1 (List[int], optional) — The second tokenized sequence.
The token type ids.
Create the token type IDs corresponding to the sequences passed. What are token type IDs?
Should be overridden in a subclass if the model has a special way of building those.
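For illustration, BERT-style tokenizers build segment ids of 0s for the first sequence and 1s for the second (a sketch assuming the bert-base-uncased checkpoint):
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("goodbye"))
type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
# Zeros cover [CLS] + sequence A + [SEP]; ones cover sequence B + [SEP]
print(type_ids)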
decode
< source >
( token_ids: typing.Union[int, typing.List[int], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')] skip_special_tokens: bool = False clean_up_tokenization_spaces: bool = None **kwargs ) → str
Parameters
token_ids (Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]) — List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) — Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional) — Whether or not to clean up the tokenization spaces. If None, will default to self.clean_up_tokenization_spaces.
kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method.
The decoded sentence.
Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.
Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).
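A minimal round-trip sketch, assuming the bert-base-uncased checkpoint:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ids = tokenizer.encode("Hello, world!")
print(tokenizer.decode(ids))                            # keeps [CLS] ... [SEP]
print(tokenizer.decode(ids, skip_special_tokens=True))  # 'hello, world!' for an uncased model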
encode
< source >
( text: typing.Union[str, typing.List[str], typing.List[int]] text_pair: typing.Union[str, typing.List[str], typing.List[int], NoneType] = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None max_length: typing.Optional[int] = None stride: int = 0 return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None **kwargs ) → List[int], torch.Tensor, tf.Tensor or np.ndarray
Parameters
text (str, List[str] or List[int]) — The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
text_pair (str, List[str] or List[int], optional) — Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
add_special_tokens (bool, optional, defaults to True) — Whether or not to add special tokens when encoding the sequences. This will use the underlying PretrainedTokenizerBase.build_inputs_with_special_tokens function, which defines which tokens are automatically added to the input ids. This is useful if you want to add bos or eos tokens automatically.
padding (bool, str or PaddingStrategy, optional, defaults to False) — Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) — Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) — Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) — If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
**kwargs — Passed along to the .tokenize() method.
Returns
List[int], torch.Tensor, tf.Tensor or np.ndarray
The tokenized ids of the text.
Converts a string into a sequence of ids (integers), using the tokenizer and vocabulary.
Same as doing self.convert_tokens_to_ids(self.tokenize(text)).
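A minimal sketch, assuming the bert-base-uncased checkpoint and PyTorch:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ids = tokenizer.encode("Hello, world!")                          # plain Python list of ids
pt_ids = tokenizer.encode("Hello, world!", return_tensors="pt")  # tensor of shape (1, sequence_length)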
encode_plus
< source >
( text: typing.Union[str, typing.List[str], typing.List[int]] text_pair: typing.Union[str, typing.List[str], typing.List[int], NoneType] = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None max_length: typing.Optional[int] = None stride: int = 0 is_split_into_words: bool = False pad_to_multiple_of: typing.Optional[int] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None return_token_type_ids: typing.Optional[bool] = None return_attention_mask: typing.Optional[bool] = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True **kwargs ) → BatchEncoding
Parameters
text (str, List[str] or List[int] (the latter only for not-fast tokenizers)) — The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
text_pair (str, List[str] or List[int], optional) — Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method).
add_special_tokens (bool, optional, defaults to True) — Whether or not to add special tokens when encoding the sequences. This will use the underlying PretrainedTokenizerBase.build_inputs_with_special_tokens function, which defines which tokens are automatically added to the input ids. This is useful if you want to add bos or eos tokens automatically.
padding (bool, str or PaddingStrategy, optional, defaults to False) — Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) — Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) — Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) — If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) — Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.
What are token type IDs?
return_attention_mask (bool, optional) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
return_overflowing_tokens (bool, optional, defaults to False) — Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) — Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) — Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast; if using Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) — Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) — Whether or not to print more information and warnings. **kwargs — passed to the self.tokenize() method
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model.
What are input IDs?
token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True or if “token_type_ids” is in self.model_input_names).
What are token type IDs?
attention_mask — List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names).
What are attention masks?
overflowing_tokens — List of overflowing tokens sequences (when a max_length is specified and return_overflowing_tokens=True).
num_truncated_tokens — Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).
special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length — The length of the inputs (when return_length=True)
Tokenize and prepare for the model a sequence or a pair of sequences.
This method is deprecated, __call__ should be used instead.
from_pretrained
< source >
( pretrained_model_name_or_path: typing.Union[str, os.PathLike] *init_inputs cache_dir: typing.Union[str, os.PathLike, NoneType] = None force_download: bool = False local_files_only: bool = False token: typing.Union[bool, str, NoneType] = None revision: str = 'main' **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
A string, the model id of a predefined tokenizer hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing vocabulary files required by the tokenizer, for instance saved using the save_pretrained() method, e.g., ./my_model_directory/.
(Deprecated, not applicable to all derived classes) A path or url to a single saved vocabulary file (if and only if the tokenizer only requires a single vocabulary file like Bert or XLNet), e.g., ./my_model_directory/vocab.txt.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded predefined tokenizer vocabulary files should be cached if the standard cache should not be used.
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the vocabulary files and override the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
local_files_only (bool, optional, defaults to False) — Whether or not to only rely on local files and not to attempt to download any files.
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
subfolder (str, optional) — In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for facebook/rag-token-base), specify it here.
inputs (additional positional arguments, optional) — Will be passed along to the Tokenizer __init__ method.
kwargs (additional keyword arguments, optional) — Will be passed to the Tokenizer __init__ method. Can be used to set special tokens like bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens. See parameters in the __init__ for more details.
Instantiate a PreTrainedTokenizerBase (or a derived class) from a predefined tokenizer.
Passing token=True is required when you want to use a private model.
Examples:
# Download vocabulary from huggingface.co and cache
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Download vocabulary from huggingface.co (user-uploaded) and cache
tokenizer = BertTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
# Load from a directory saved with save_pretrained("./test/saved_model/")
tokenizer = BertTokenizer.from_pretrained("./test/saved_model/")
# Point directly to a single vocabulary file (slow tokenizers only)
tokenizer = BertTokenizer.from_pretrained("./test/saved_model/my_vocab.txt")
# Override special tokens at load time; make sure "<unk>" is actually in the vocabulary
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", unk_token="<unk>")
assert tokenizer.unk_token == "<unk>"
get_special_tokens_mask
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → A list of integers in the range [0, 1]
Parameters
token_ids_0 (List[int]) — List of ids of the first sequence.
token_ids_1 (List[int], optional) — List of ids of the second sequence.
already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model.
Returns
A list of integers in the range [0, 1]
1 for a special token, 0 for a sequence token.
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model or encode_plus methods.
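A minimal sketch, assuming the bert-base-uncased checkpoint:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ids = tokenizer.encode("Hello world")  # includes [CLS] and [SEP]
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
print(mask)  # e.g. [1, 0, 0, 1]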
get_vocab
Returns the vocabulary as a dictionary of token to index.
tokenizer.get_vocab()[token] is equivalent to tokenizer.convert_tokens_to_ids(token) when token is in the vocab.
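A minimal sketch, assuming the bert-base-uncased checkpoint (whose base vocabulary has 30522 entries):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
vocab = tokenizer.get_vocab()
assert vocab["hello"] == tokenizer.convert_tokens_to_ids("hello")
print(len(vocab))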
pad
< source >
( encoded_inputs: typing.Union[transformers.tokenization_utils_base.BatchEncoding, typing.List[transformers.tokenization_utils_base.BatchEncoding], typing.Dict[str, typing.List[int]], typing.Dict[str, typing.List[typing.List[int]]], typing.List[typing.Dict[str, typing.List[int]]]] padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = True max_length: typing.Optional[int] = None pad_to_multiple_of: typing.Optional[int] = None return_attention_mask: typing.Optional[bool] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None verbose: bool = True )
Parameters
encoded_inputs (BatchEncoding, list of BatchEncoding, Dict[str, List[int]], Dict[str, List[List[int]]] or List[Dict[str, List[int]]]) — Tokenized inputs. Can represent one input (BatchEncoding or Dict[str, List[int]]) or a batch of tokenized inputs (list of BatchEncoding, Dict[str, List[List[int]]] or List[Dict[str, List[int]]]) so you can use this method during preprocessing as well as in a PyTorch Dataloader collate function.
Instead of List[int] you can have tensors (numpy arrays, PyTorch tensors or TensorFlow tensors), see the note above for the return type.
padding (bool, str or PaddingStrategy, optional, defaults to True) — Select a strategy to pad the returned sequences (according to the model’s padding side and padding index) among:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
max_length (int, optional) — Maximum length of the returned list and optionally padding length (see above).
pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_attention_mask (bool, optional) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
verbose (bool, optional, defaults to True) — Whether or not to print more information and warnings.
Pad a single encoded input or a batch of encoded inputs up to predefined length or to the max sequence length in the batch.
The padding side (left/right) and the padding token ids are defined at the tokenizer level (with self.padding_side, self.pad_token_id and self.pad_token_type_id).
Please note that with a fast tokenizer, using the __call__ method is faster than using a method to encode the text followed by a call to the pad method to get a padded encoding.
If the encoded_inputs passed are dictionary of numpy arrays, PyTorch tensors or TensorFlow tensors, the result will use the same type unless you provide a different tensor type with return_tensors. In the case of PyTorch tensors, you will lose the specific device of your tensors however.
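A minimal sketch of padding pre-tokenized features, for instance inside a DataLoader collate function (assuming the bert-base-uncased checkpoint and PyTorch):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
features = [tokenizer("Hello"), tokenizer("A noticeably longer sentence than the first one")]
batch = tokenizer.pad(features, padding=True, return_tensors="pt")
print(batch["input_ids"].shape)    # (2, length_of_the_longest_sequence)
print(batch["attention_mask"][0])  # trailing zeros mark the padded positions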
prepare_for_model
< source >
( ids: typing.List[int] pair_ids: typing.Optional[typing.List[int]] = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None max_length: typing.Optional[int] = None stride: int = 0 pad_to_multiple_of: typing.Optional[int] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None return_token_type_ids: typing.Optional[bool] = None return_attention_mask: typing.Optional[bool] = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True prepend_batch_axis: bool = False **kwargs ) → BatchEncoding
Parameters
ids (List[int]) — Tokenized input ids of the first sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.
pair_ids (List[int], optional) — Tokenized input ids of the second sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.
add_special_tokens (bool, optional, defaults to True) — Whether or not to add special tokens when encoding the sequences. This will use the underlying PretrainedTokenizerBase.build_inputs_with_special_tokens function, which defines which tokens are automatically added to the input ids. This is useful if you want to add bos or eos tokens automatically.
padding (bool, str or PaddingStrategy, optional, defaults to False) — Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) — Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) — Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) — If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) — Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) — Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.
What are token type IDs?
return_attention_mask (bool, optional) — Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
return_overflowing_tokens (bool, optional, defaults to False) — Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) — Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) — Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast; if using Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) — Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) — Whether or not to print more information and warnings. **kwargs — passed to the self.tokenize() method
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model.
What are input IDs?
token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True or if “token_type_ids” is in self.model_input_names).
What are token type IDs?
attention_mask — List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names).
What are attention masks?
overflowing_tokens — List of overflowing tokens sequences (when a max_length is specified and return_overflowing_tokens=True).
num_truncated_tokens — Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).
special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length — The length of the inputs (when return_length=True)
Prepares a sequence of input ids, or a pair of sequences of input ids, so that it can be used by the model. It adds special tokens, truncates sequences if overflowing while taking into account the special tokens, and manages a moving window (with user-defined stride) for overflowing tokens. Please note that for pair_ids different from None and truncation_strategy = longest_first or True, it is not possible to return overflowing tokens; such a combination of arguments will raise an error.
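A minimal sketch, assuming the bert-base-uncased checkpoint:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
encoded = tokenizer.prepare_for_model(ids, add_special_tokens=True)
print(encoded["input_ids"])  # the same ids with this tokenizer's special tokens added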
prepare_seq2seq_batch
< source >
( src_texts: typing.List[str] tgt_texts: typing.Optional[typing.List[str]] = None max_length: typing.Optional[int] = None max_target_length: typing.Optional[int] = None padding: str = 'longest' return_tensors: str = None truncation: bool = True **kwargs ) → BatchEncoding
Parameters
src_texts (List[str]) — List of documents to summarize or source language texts.
tgt_texts (list, optional) — List of summaries or target language texts.
max_length (int, optional) — Controls the maximum length for encoder inputs (documents to summarize or source language texts) If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
max_target_length (int, optional) — Controls the maximum length of decoder inputs (target language texts or summaries) If left unset or set to None, this will use the max_length value.
padding (bool, str or PaddingStrategy, optional, defaults to False) — Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
truncation (bool, str or TruncationStrategy, optional, defaults to True) — Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size). **kwargs — Additional keyword arguments passed along to self.__call__.
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to the encoder.
attention_mask — List of indices specifying which tokens should be attended to by the model.
labels — List of token ids for tgt_texts.
The full set of keys [input_ids, attention_mask, labels], will only be returned if tgt_texts is passed. Otherwise, input_ids, attention_mask will be the only keys.
Prepare model inputs for translation. For best performance, translate one sentence at a time.
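As a rough sketch (this method is kept for backward compatibility; recent versions recommend calling the tokenizer directly with text_target instead), usage looks like the following, with "t5-small" as an illustrative checkpoint:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["Translate English to German: Hello."],
    tgt_texts=["Hallo."],
    return_tensors="pt",
)
print(batch.keys())  # input_ids, attention_mask, labels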
push_to_hub
< source >
( repo_id: str use_temp_dir: typing.Optional[bool] = None commit_message: typing.Optional[str] = None private: typing.Optional[bool] = None token: typing.Union[bool, str, NoneType] = None max_shard_size: typing.Union[int, str, NoneType] = '10GB' create_pr: bool = False safe_serialization: bool = False revision: str = None **deprecated_kwargs )
Parameters
repo_id (str) — The name of the repository you want to push your tokenizer to. It should contain your organization name when pushing to a given organization.
use_temp_dir (bool, optional) — Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to True if there is no directory named like repo_id, False otherwise.
commit_message (str, optional) — Message to commit while pushing. Will default to "Upload tokenizer".
private (bool, optional) — Whether or not the repository created should be private.
token (bool or str, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.
max_shard_size (int or str, optional, defaults to "10GB") — Only applicable for models. The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be of a size lower than this limit. If expressed as a string, needs to be digits followed by a unit (like "5MB").
create_pr (bool, optional, defaults to False) — Whether or not to create a PR with the uploaded files or directly commit.
safe_serialization (bool, optional, defaults to False) — Whether or not to convert the model weights in safetensors format for safer serialization.
revision (str, optional) — Branch to push the uploaded files to.
Upload the tokenizer files to the 🤗 Model Hub.
Examples:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenizer.push_to_hub("my-finetuned-bert")
tokenizer.push_to_hub("huggingface/my-finetuned-bert")
register_for_auto_class
< source >
( auto_class = 'AutoTokenizer' )
Parameters
auto_class (str or type, optional, defaults to "AutoTokenizer") — The auto class to register this new tokenizer with.
Register this class with a given auto class. This should only be used for custom tokenizers as the ones in the library are already mapped with AutoTokenizer.
This API is experimental and may have some slight breaking changes in the next releases.
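A minimal sketch, assuming a hypothetical custom tokenizer class named MyCustomTokenizer:
from transformers import PreTrainedTokenizer

class MyCustomTokenizer(PreTrainedTokenizer):
    # hypothetical custom tokenizer; vocabulary handling omitted for brevity
    ...

MyCustomTokenizer.register_for_auto_class("AutoTokenizer")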
save_pretrained
< source >
( save_directory: typing.Union[str, os.PathLike] legacy_format: typing.Optional[bool] = None filename_prefix: typing.Optional[str] = None push_to_hub: bool = False **kwargs ) → A tuple of str
Parameters
save_directory (str or os.PathLike) — The path to a directory where the tokenizer will be saved.
legacy_format (bool, optional) — Only applicable for a fast tokenizer. If unset (default), will save the tokenizer in the unified JSON format as well as in legacy format if it exists, i.e. with tokenizer specific vocabulary and a separate added_tokens files.
If False, will only save the tokenizer in the unified JSON format. This format is incompatible with “slow” tokenizers (not powered by the tokenizers library), so the tokenizer will not be able to be loaded in the corresponding “slow” tokenizer.
If True, will save the tokenizer in legacy format. If the “slow” tokenizer doesn’t exist, a ValueError is raised.
filename_prefix (str, optional) — A prefix to add to the names of the files saved by the tokenizer.
push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).
kwargs (Dict[str, Any], optional) — Additional key word arguments passed along to the push_to_hub() method.
The files saved.
Save the full tokenizer state.
This method makes sure the full tokenizer can then be re-loaded using the ~tokenization_utils_base.PreTrainedTokenizer.from_pretrained class method.
Warning: This won’t save modifications you may have applied to the tokenizer after the instantiation (for instance, modifying tokenizer.do_lower_case after creation).
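For example, a round trip of saving and reloading might look like this (the local path is just an illustration):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
saved_files = tokenizer.save_pretrained("./my_tokenizer")  # returns the paths of the files written
reloaded = AutoTokenizer.from_pretrained("./my_tokenizer")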
save_vocabulary
< source >
( save_directory: str filename_prefix: typing.Optional[str] = None ) → Tuple(str)
Parameters
save_directory (str) — The directory in which to save the vocabulary.
filename_prefix (str, optional) — An optional prefix to add to the names of the saved files.
Paths to the files saved.
Save only the vocabulary of the tokenizer (vocabulary + added tokens).
This method won’t save the configuration and special token mappings of the tokenizer. Use _save_pretrained() to save the whole state of the tokenizer.
tokenize
< source >
( text: str pair: typing.Optional[str] = None add_special_tokens: bool = False **kwargs ) → List[str]
Parameters
text (str) — The sequence to be encoded.
pair (str, optional) — A second sequence to be encoded with the first.
add_special_tokens (bool, optional, defaults to False) — Whether or not to add the special tokens associated with the corresponding model.
kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific encode method. See details in call()
The list of tokens.
Converts a string into a sequence of tokens, replacing unknown tokens with the unk_token.
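For instance (checkpoint name illustrative only; the exact sub-tokens depend on the tokenizer's vocabulary):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokens = tokenizer.tokenize("Tokenizers split text into sub-word units.")
print(tokens)  # a list of WordPiece tokens for this checkpoint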
truncate_sequences
< source >
( ids: typing.List[int] pair_ids: typing.Optional[typing.List[int]] = None num_tokens_to_remove: int = 0 truncation_strategy: typing.Union[str, transformers.tokenization_utils_base.TruncationStrategy] = 'longest_first' stride: int = 0 ) → Tuple[List[int], List[int], List[int]]
Parameters
ids (List[int]) — Tokenized input ids of the first sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.
pair_ids (List[int], optional) — Tokenized input ids of the second sequence. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.
num_tokens_to_remove (int, optional, defaults to 0) — Number of tokens to remove using the truncation strategy.
truncation_strategy (str or TruncationStrategy, optional, defaults to False) — The strategy to follow for truncation. Can be:
'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
stride (int, optional, defaults to 0) — If set to a positive number, the overflowing tokens returned will contain some tokens from the main sequence returned. The value of this argument defines the number of additional tokens.
Returns
Tuple[List[int], List[int], List[int]]
The truncated ids, the truncated pair_ids and the list of overflowing tokens. Note: The longest_first strategy returns empty list of overflowing tokens if a pair of sequences (or a batch of pairs) is provided.
Truncates a sequence pair in-place following the strategy.
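A minimal sketch of truncating a pair of already-converted id sequences (checkpoint and sentences are illustrative):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("A fairly long first sentence about tokenization."))
pair_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("A shorter second sentence."))
ids, pair_ids, overflowing = tokenizer.truncate_sequences(
    ids, pair_ids=pair_ids, num_tokens_to_remove=2, truncation_strategy="longest_first"
)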
class transformers.SpecialTokensMixin
< source >
( verbose = True **kwargs )
Parameters
bos_token (str or tokenizers.AddedToken, optional) — A special token representing the beginning of a sentence.
eos_token (str or tokenizers.AddedToken, optional) — A special token representing the end of a sentence.
unk_token (str or tokenizers.AddedToken, optional) — A special token representing an out-of-vocabulary token.
sep_token (str or tokenizers.AddedToken, optional) — A special token separating two different sentences in the same input (used by BERT for instance).
pad_token (str or tokenizers.AddedToken, optional) — A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by attention mechanisms or loss computation.
cls_token (str or tokenizers.AddedToken, optional) — A special token representing the class of the input (used by BERT for instance).
mask_token (str or tokenizers.AddedToken, optional) — A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT).
additional_special_tokens (tuple or list of str or tokenizers.AddedToken, optional) — A tuple or a list of additional tokens, which will be marked as special, meaning that they will be skipped when decoding if skip_special_tokens is set to True.
A mixin derived by PreTrainedTokenizer and PreTrainedTokenizerFast to handle specific behaviors related to special tokens. In particular, this class holds the attributes which can be used to directly access these special tokens in a model-independent manner and allows setting and updating the special tokens.
add_special_tokens
< source >
( special_tokens_dict: typing.Dict[str, typing.Union[str, tokenizers.AddedToken]] replace_additional_special_tokens = True ) → int
Parameters
special_tokens_dict (dictionary str to str or tokenizers.AddedToken) — Keys should be in the list of predefined special attributes: [bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens].
Tokens are only added if they are not already in the vocabulary (tested by checking if the tokenizer assigns the index of the unk_token to them).
replace_additional_special_tokens (bool, optional, defaults to True) — If True, the existing list of additional special tokens will be replaced by the list provided in special_tokens_dict. Otherwise, self._additional_special_tokens is just extended. In the former case, the tokens will NOT be removed from the tokenizer’s full vocabulary - they are only being flagged as non-special tokens. Remember, this only affects which tokens are skipped during decoding, not the added_tokens_encoder and added_tokens_decoder. This means that the previous additional_special_tokens are still added tokens, and will not be split by the model.
Number of tokens added to the vocabulary.
Add a dictionary of special tokens (eos, pad, cls, etc.) to the encoder and link them to class attributes. If special tokens are NOT in the vocabulary, they are added to it (indexed starting from the last index of the current vocabulary).
When adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that its embedding matrix matches the tokenizer.
In order to do that, please use the resize_token_embeddings() method.
Using add_special_tokens will ensure your special tokens can be used in several ways:
Special tokens can be skipped when decoding using skip_special_tokens = True.
Special tokens are carefully handled by the tokenizer (they are never split), similar to AddedTokens.
You can easily refer to special tokens using tokenizer class attributes like tokenizer.cls_token. This makes it easy to develop model-agnostic training and fine-tuning scripts.
When possible, special tokens are already registered for provided pretrained models (for instance BertTokenizer cls_token is already registered to be '[CLS]' and XLM’s one is also registered to be '</s>').
Examples:
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
special_tokens_dict = {"cls_token": "<CLS>"}
num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
print("We have added", num_added_toks, "tokens")
model.resize_token_embeddings(len(tokenizer))
assert tokenizer.cls_token == "<CLS>"
add_tokens
< source >
( new_tokens: typing.Union[str, tokenizers.AddedToken, typing.List[typing.Union[str, tokenizers.AddedToken]]] special_tokens: bool = False ) → int
Parameters
new_tokens (str, tokenizers.AddedToken or a list of str or tokenizers.AddedToken) — Tokens are only added if they are not already in the vocabulary. tokenizers.AddedToken wraps a string token to let you personalize its behavior: whether this token should only match against a single word, whether this token should strip all potential whitespaces on the left side, whether this token should strip all potential whitespaces on the right side, etc.
special_tokens (bool, optional, defaults to False) — Can be used to specify if the token is a special token. This mostly changes the normalization behavior (special tokens like CLS or [MASK] are usually not lower-cased for instance).
See details for tokenizers.AddedToken in HuggingFace tokenizers library.
Number of tokens added to the vocabulary.
Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to it with indices starting from the length of the current vocabulary and will be isolated before the tokenization algorithm is applied. Added tokens and tokens from the vocabulary of the tokenization algorithm are therefore not treated in the same way.
Note, when adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that its embedding matrix matches the tokenizer.
In order to do that, please use the resize_token_embeddings() method.
Examples:
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
num_added_toks = tokenizer.add_tokens(["new_tok1", "my_new-tok2"])
print("We have added", num_added_toks, "tokens")
model.resize_token_embeddings(len(tokenizer))
sanitize_special_tokens is now deprecated and kept for backward compatibility; it will be removed in transformers v5.
class transformers.tokenization_utils_base.TruncationStrategy
< source >
( value names = None module = None qualname = None type = None start = 1 )
Possible values for the truncation argument in PreTrainedTokenizerBase.call(). Useful for tab-completion in an IDE.
class transformers.CharSpan
< source >
( start: int end: int )
Parameters
start (int) — Index of the first character in the original string.
end (int) — Index of the character following the last character in the original string.
Character span in the original string.
class transformers.TokenSpan
< source >
( start: int end: int )
Parameters
start (int) — Index of the first token in the span.
end (int) — Index of the token following the last token in the span.
Token span in an encoded string (list of tokens). |
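Both span types are typically obtained from a fast tokenizer's BatchEncoding rather than constructed by hand; a small sketch (checkpoint name illustrative):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")  # a fast tokenizer
encoding = tokenizer("Hello world")
span = encoding.token_to_chars(1)  # CharSpan for the first non-special token
print(span.start, span.end)        # character indices into the original string
print(encoding.char_to_token(0))   # index of the token covering the first character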
Trainer
The Trainer class provides an API for feature-complete training in PyTorch for most standard use cases. It’s used in most of the example scripts.
Before instantiating your Trainer, create a TrainingArguments to access all the points of customization during training.
The API supports distributed training on multiple GPUs/TPUs, mixed precision through NVIDIA Apex and Native AMP for PyTorch.
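For orientation, a minimal setup might look like the following sketch, where the checkpoint name, datasets, and output directory are placeholders to adapt to your use case:
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

# Placeholders: swap in your own checkpoint, datasets, and output directory.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
args = TrainingArguments(output_dir="out", per_device_train_batch_size=8, num_train_epochs=3)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset, eval_dataset=eval_dataset)
trainer.train()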
The Trainer contains the basic training loop which supports the above features. To inject custom behavior, you can subclass Trainer and override the following methods:
get_train_dataloader — Creates the training DataLoader.
get_eval_dataloader — Creates the evaluation DataLoader.
get_test_dataloader — Creates the test DataLoader.
log — Logs information on the various objects watching training.
create_optimizer_and_scheduler — Sets up the optimizer and learning rate scheduler if they were not passed at init. Note, that you can also subclass or override the create_optimizer and create_scheduler methods separately.
create_optimizer — Sets up the optimizer if it wasn’t passed at init.
create_scheduler — Sets up the learning rate scheduler if it wasn’t passed at init.
compute_loss — Computes the loss on a batch of training inputs.
training_step — Performs a training step.
prediction_step — Performs an evaluation/test step.
evaluate — Runs an evaluation loop and returns metrics.
predict — Returns predictions (with metrics if labels are available) on a test set.
The Trainer class is optimized for 🤗 Transformers models and can have surprising behaviors when you use it on other models. When using it on your own model, make sure:
your model always return tuples or subclasses of ModelOutput.
your model can compute the loss if a labels argument is provided and that loss is returned as the first element of the tuple (if your model returns tuples)
your model can accept multiple label arguments (use the label_names in your TrainingArguments to indicate their name to the Trainer) but none of them should be named "label".
Here is an example of how to customize Trainer to use a weighted loss (useful when you have an unbalanced training set):
import torch
from torch import nn
from transformers import Trainer


class CustomTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        # forward pass
        outputs = model(**inputs)
        logits = outputs.get("logits")
        # compute custom loss (here: 3 labels with different weights)
        loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 3.0], device=model.device))
        loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
Another way to customize the training loop behavior for the PyTorch Trainer is to use callbacks that can inspect the training loop state (for progress reporting, logging on TensorBoard or other ML platforms…) and take decisions (like early stopping).
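For example, early stopping can be enabled with the built-in EarlyStoppingCallback (the model and datasets below are placeholders; it also requires load_best_model_at_end=True and a metric_for_best_model in TrainingArguments):
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
)
trainer = Trainer(
    model=model,                  # placeholder model
    args=args,
    train_dataset=train_dataset,  # placeholder datasets
    eval_dataset=eval_dataset,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)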
Trainer
class transformers.Trainer
< source >
( model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module] = None args: TrainingArguments = None data_collator: typing.Optional[DataCollator] = None train_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None eval_dataset: typing.Union[torch.utils.data.dataset.Dataset, typing.Dict[str, torch.utils.data.dataset.Dataset], NoneType] = None tokenizer: typing.Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None model_init: typing.Union[typing.Callable[[], transformers.modeling_utils.PreTrainedModel], NoneType] = None compute_metrics: typing.Union[typing.Callable[[transformers.trainer_utils.EvalPrediction], typing.Dict], NoneType] = None callbacks: typing.Optional[typing.List[transformers.trainer_callback.TrainerCallback]] = None optimizers: typing.Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None) preprocess_logits_for_metrics: typing.Union[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor], NoneType] = None )
Parameters
model (PreTrainedModel or torch.nn.Module, optional) — The model to train, evaluate or use for predictions. If not provided, a model_init must be passed.
Trainer is optimized to work with the PreTrainedModel provided by the library. You can still use your own models defined as torch.nn.Module as long as they work the same way as the 🤗 Transformers models.
args (TrainingArguments, optional) — The arguments to tweak for training. Will default to a basic instance of TrainingArguments with the output_dir set to a directory named tmp_trainer in the current directory if not provided.
data_collator (DataCollator, optional) — The function to use to form a batch from a list of elements of train_dataset or eval_dataset. Will default to default_data_collator() if no tokenizer is provided, an instance of DataCollatorWithPadding otherwise.
train_dataset (torch.utils.data.Dataset or torch.utils.data.IterableDataset, optional) — The dataset to use for training. If it is a Dataset, columns not accepted by the model.forward() method are automatically removed.
Note that if it’s a torch.utils.data.IterableDataset with some randomization and you are training in a distributed fashion, your iterable dataset should either use an internal attribute generator that is a torch.Generator for the randomization that must be identical on all processes (and the Trainer will manually set the seed of this generator at each epoch) or have a set_epoch() method that internally sets the seed of the RNGs used.
eval_dataset (Union[torch.utils.data.Dataset, Dict[str, torch.utils.data.Dataset]], optional) — The dataset to use for evaluation. If it is a Dataset, columns not accepted by the model.forward() method are automatically removed. If it is a dictionary, it will evaluate on each dataset, prepending the dictionary key to the metric name.
tokenizer (PreTrainedTokenizerBase, optional) — The tokenizer used to preprocess the data. If provided, will be used to automatically pad the inputs to the maximum length when batching inputs, and it will be saved along the model to make it easier to rerun an interrupted training or reuse the fine-tuned model.
model_init (Callable[[], PreTrainedModel], optional) — A function that instantiates the model to be used. If provided, each call to train() will start from a new instance of the model as given by this function.
The function may have zero argument, or a single one containing the optuna/Ray Tune/SigOpt trial object, to be able to choose different architectures according to hyperparameters (such as layer count, sizes of inner layers, dropout probabilities, etc.).
compute_metrics (Callable[[EvalPrediction], Dict], optional) — The function that will be used to compute metrics at evaluation. Must take a EvalPrediction and return a dictionary string to metric values.
callbacks (List of TrainerCallback, optional) — A list of callbacks to customize the training loop. Will add those to the list of default callbacks detailed in here.
If you want to remove one of the default callbacks used, use the Trainer.remove_callback() method.
optimizers (Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR], optional) — A tuple containing the optimizer and the scheduler to use. Will default to an instance of AdamW on your model and a scheduler given by get_linear_schedule_with_warmup() controlled by args.
preprocess_logits_for_metrics (Callable[[torch.Tensor, torch.Tensor], torch.Tensor], optional) — A function that preprocesses the logits right before caching them at each evaluation step. Must take two tensors, the logits and the labels, and return the logits once processed as desired. The modifications made by this function will be reflected in the predictions received by compute_metrics.
Note that the labels (second parameter) will be None if the dataset does not have them.
Trainer is a simple but feature-complete training and eval loop for PyTorch, optimized for 🤗 Transformers.
Important attributes:
model — Always points to the core model. If using a transformers model, it will be a PreTrainedModel subclass.
model_wrapped — Always points to the most external model in case one or more other modules wrap the original model. This is the model that should be used for the forward pass. For example, under DeepSpeed, the inner model is wrapped in DeepSpeed and then again in torch.nn.DistributedDataParallel. If the inner model hasn’t been wrapped, then self.model_wrapped is the same as self.model.
is_model_parallel — Whether or not a model has been switched to a model parallel mode (different from data parallelism, this means some of the model layers are split on different GPUs).
place_model_on_device — Whether or not to automatically place the model on the device - it will be set to False if model parallel or deepspeed is used, or if the default TrainingArguments.place_model_on_device is overridden to return False.
is_in_train — Whether or not a model is currently running train (e.g. when evaluate is called while in train)
add_callback
< source >
( callback )
Parameters
callback (type or ~transformer.TrainerCallback) — A ~transformer.TrainerCallback class or an instance of a ~transformer.TrainerCallback. In the first case, will instantiate a member of that class.
Add a callback to the current list of ~transformer.TrainerCallback.
autocast_smart_context_manager
< source >
( cache_enabled: typing.Optional[bool] = True )
A helper wrapper that creates an appropriate context manager for autocast while feeding it the desired arguments, depending on the situation.
compute_loss
< source >
( model inputs return_outputs = False )
How the loss is computed by Trainer. By default, all models return the loss in the first element.
Subclass and override for custom behavior.
compute_loss_context_manager
A helper wrapper to group together context managers.
create_model_card
< source >
( language: typing.Optional[str] = None license: typing.Optional[str] = None tags: typing.Union[str, typing.List[str], NoneType] = None model_name: typing.Optional[str] = None finetuned_from: typing.Optional[str] = None tasks: typing.Union[str, typing.List[str], NoneType] = None dataset_tags: typing.Union[str, typing.List[str], NoneType] = None dataset: typing.Union[str, typing.List[str], NoneType] = None dataset_args: typing.Union[str, typing.List[str], NoneType] = None )
Parameters
language (str, optional) — The language of the model (if applicable)
license (str, optional) — The license of the model. Will default to the license of the pretrained model used, if the original model given to the Trainer comes from a repo on the Hub.
tags (str or List[str], optional) — Some tags to be included in the metadata of the model card.
model_name (str, optional) — The name of the model.
finetuned_from (str, optional) — The name of the model used to fine-tune this one (if applicable). Will default to the name of the repo of the original model given to the Trainer (if it comes from the Hub).
tasks (str or List[str], optional) — One or several task identifiers, to be included in the metadata of the model card.
dataset_tags (str or List[str], optional) — One or several dataset tags, to be included in the metadata of the model card.
dataset (str or List[str], optional) — One or several dataset identifiers, to be included in the metadata of the model card.
dataset_args (str or List[str], optional) — One or several dataset arguments, to be included in the metadata of the model card.
Creates a draft of a model card using the information available to the Trainer.
create_optimizer
Setup the optimizer.
We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the Trainer’s init through optimizers, or subclass and override this method in a subclass.
create_optimizer_and_scheduler
< source >
( num_training_steps: int )
Setup the optimizer and the learning rate scheduler.
We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the Trainer’s init through optimizers, or subclass and override this method (or create_optimizer and/or create_scheduler) in a subclass.
create_scheduler
< source >
( num_training_steps: int optimizer: Optimizer = None )
Parameters
num_training_steps (int) — The number of training steps to do.
Setup the scheduler. The optimizer of the trainer must have been set up either before this method is called or passed as an argument.
evaluate
< source >
( eval_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'eval' )
Parameters
eval_dataset (Dataset, optional) — Pass a dataset if you wish to override self.eval_dataset. If it is a Dataset, columns not accepted by the model.forward() method are automatically removed. It must implement the __len__ method.
ignore_keys (List[str], optional) — A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
metric_key_prefix (str, optional, defaults to "eval") — An optional prefix to be used as the metrics key prefix. For example the metrics “bleu” will be named “eval_bleu” if the prefix is “eval” (default)
Run evaluation and returns metrics.
The calling script will be responsible for providing a method to compute metrics, as they are task-dependent (pass it to the init compute_metrics argument).
You can also subclass and override this method to inject custom behavior.
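For instance, assuming the Trainer was built with an eval_dataset (and a compute_metrics function if you want more than the loss):
metrics = trainer.evaluate()  # evaluates on self.eval_dataset
print(metrics["eval_loss"])   # keys are prefixed with metric_key_prefix ("eval" by default)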
evaluation_loop
< source >
( dataloader: DataLoader description: str prediction_loss_only: typing.Optional[bool] = None ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'eval' )
Prediction/evaluation loop, shared by Trainer.evaluate() and Trainer.predict().
Works both with or without labels.
floating_point_ops
< source >
( inputs: typing.Dict[str, typing.Union[torch.Tensor, typing.Any]] ) → int
Parameters
inputs (Dict[str, Union[torch.Tensor, Any]]) — The inputs and targets of the model.
The number of floating-point operations.
For models that inherit from PreTrainedModel, uses that method to compute the number of floating point operations for every backward + forward pass. If using another model, either implement such a method in the model or subclass and override this method.
Get all parameter names to which weight decay will be applied.
Note that some models implement their own layernorm instead of calling nn.LayerNorm; weight decay could still apply to those modules since this function only filters out instances of nn.LayerNorm.
get_eval_dataloader
< source >
( eval_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None )
Parameters
eval_dataset (torch.utils.data.Dataset, optional) — If provided, will override self.eval_dataset. If it is a Dataset, columns not accepted by the model.forward() method are automatically removed. It must implement __len__.
Returns the evaluation ~torch.utils.data.DataLoader.
Subclass and override this method if you want to inject some custom behavior.
get_optimizer_cls_and_kwargs
< source >
( args: TrainingArguments )
Parameters
args (transformers.training_args.TrainingArguments) — The training arguments for the training session.
Returns the optimizer class and optimizer parameters based on the training arguments.
get_test_dataloader
< source >
( test_dataset: Dataset )
Parameters
test_dataset (torch.utils.data.Dataset, optional) — The test dataset to use. If it is a Dataset, columns not accepted by the model.forward() method are automatically removed. It must implement __len__.
Returns the test ~torch.utils.data.DataLoader.
Subclass and override this method if you want to inject some custom behavior.
get_train_dataloader
Returns the training ~torch.utils.data.DataLoader.
Will use no sampler if train_dataset does not implement __len__, a random sampler (adapted to distributed training if necessary) otherwise.
Subclass and override this method if you want to inject some custom behavior.
hyperparameter_search
< source >
( hp_space: typing.Union[typing.Callable[[ForwardRef('optuna.Trial')], typing.Dict[str, float]], NoneType] = None compute_objective: typing.Union[typing.Callable[[typing.Dict[str, float]], float], NoneType] = None n_trials: int = 20 direction: typing.Union[str, typing.List[str]] = 'minimize' backend: typing.Union[ForwardRef('str'), transformers.trainer_utils.HPSearchBackend, NoneType] = None hp_name: typing.Union[typing.Callable[[ForwardRef('optuna.Trial')], str], NoneType] = None **kwargs ) → [trainer_utils.BestRun or List[trainer_utils.BestRun]]
Parameters
hp_space (Callable[["optuna.Trial"], Dict[str, float]], optional) — A function that defines the hyperparameter search space. Will default to default_hp_space_optuna() or default_hp_space_ray() or default_hp_space_sigopt() depending on your backend.
compute_objective (Callable[[Dict[str, float]], float], optional) — A function computing the objective to minimize or maximize from the metrics returned by the evaluate method. Will default to default_compute_objective().
n_trials (int, optional, defaults to 20) — The number of trial runs to test.
direction (str or List[str], optional, defaults to "minimize") — If it’s single objective optimization, direction is str, can be "minimize" or "maximize", you should pick "minimize" when optimizing the validation loss, "maximize" when optimizing one or several metrics. If it’s multi objectives optimization, direction is List[str], can be List of "minimize" and "maximize", you should pick "minimize" when optimizing the validation loss, "maximize" when optimizing one or several metrics.
backend (str or ~training_utils.HPSearchBackend, optional) — The backend to use for hyperparameter search. Will default to optuna or Ray Tune or SigOpt, depending on which one is installed. If all are installed, will default to optuna.
hp_name (Callable[["optuna.Trial"], str]], optional) — A function that defines the trial/run name. Will default to None.
kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to optuna.create_study or ray.tune.run. For more information see:
the documentation of optuna.create_study
the documentation of tune.run
the documentation of sigopt
Returns
[trainer_utils.BestRun or List[trainer_utils.BestRun]]
All the information about the best run or best runs for multi-objective optimization. Experiment summary can be found in run_summary attribute for Ray backend.
Launch a hyperparameter search using optuna or Ray Tune or SigOpt. The optimized quantity is determined by compute_objective, which defaults to a function returning the evaluation loss when no metric is provided, the sum of all metrics otherwise.
To use this method, you need to have provided a model_init when initializing your Trainer: we need to reinitialize the model at each new run. This is incompatible with the optimizers argument, so you need to subclass Trainer and override the method create_optimizer_and_scheduler() for custom optimizer/scheduler.
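A minimal sketch with the optuna backend, assuming optuna is installed and the Trainer was created with model_init (the search space below is only an example):
def hp_space(trial):
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-4, log=True),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 1, 5),
    }

best_run = trainer.hyperparameter_search(
    hp_space=hp_space, backend="optuna", n_trials=10, direction="minimize"
)
print(best_run.hyperparameters)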
init_git_repo
< source >
( at_init: bool = False )
Parameters
at_init (bool, optional, defaults to False) — Whether this function is called before any training or not. If self.args.overwrite_output_dir is True and at_init is True, the path to the repo (which is self.args.output_dir) might be wiped out.
Initializes a git repo in self.args.hub_model_id.
This function is deprecated and will be removed in v4.34.0 of Transformers.
is_local_process_zero
Whether or not this process is the local (e.g., on one machine if training in a distributed fashion on several machines) main process.
is_world_process_zero
Whether or not this process is the global main process (when training in a distributed fashion on several machines, this is only going to be True for one process).
log
< source >
( logs: typing.Dict[str, float] )
Parameters
logs (Dict[str, float]) — The values to log.
Log logs on the various objects watching training.
Subclass and override this method to inject custom behavior.
log_metrics
< source >
( split metrics )
Parameters
split (str) — Mode/split name: one of train, eval, test
metrics (Dict[str, float]) — The metrics returned from train/evaluate/predict
Log metrics in a specially formatted way.
Under distributed environment this is done only for a process with rank 0.
Notes on memory reports:
In order to get memory usage report you need to install psutil. You can do that with pip install psutil.
Now when this method is run, you will see a report that will include:
init_mem_cpu_alloc_delta = 1301MB
init_mem_cpu_peaked_delta = 154MB
init_mem_gpu_alloc_delta = 230MB
init_mem_gpu_peaked_delta = 0MB
train_mem_cpu_alloc_delta = 1345MB
train_mem_cpu_peaked_delta = 0MB
train_mem_gpu_alloc_delta = 693MB
train_mem_gpu_peaked_delta = 7MB
Understanding the reports:
the first segment, e.g., train__, tells you which stage the metrics are for. Reports starting with init_ will be added to the first stage that gets run. So if only evaluation is run, the memory usage for the __init__ will be reported along with the eval_ metrics.
the third segment is either cpu or gpu and tells you whether it’s the general RAM or the gpu0 memory metric.
*_alloc_delta - is the difference in the used/allocated memory counter between the end and the start of the stage - it can be negative if a function released more memory than it allocated.
*_peaked_delta - is any extra memory that was consumed and then freed - relative to the current allocated memory counter - it is never negative. When you look at the metrics of any stage you add up alloc_delta + peaked_delta and you know how much memory was needed to complete that stage.
The reporting happens only for process of rank 0 and gpu 0 (if there is a gpu). Typically this is enough since the main process does the bulk of work, but it could be not quite so if model parallel is used and then other GPUs may use a different amount of gpu memory. This is also not the same under DataParallel where gpu0 may require much more memory than the rest since it stores the gradient and optimizer states for all participating GPUS. Perhaps in the future these reports will evolve to measure those too.
The CPU RAM metric measures RSS (Resident Set Size), which includes both the memory unique to the process and the memory shared with other processes. It is important to note that it does not include swapped-out memory, so the reports could be imprecise.
The CPU peak memory is measured using a sampling thread. Due to python’s GIL it may miss some of the peak memory if that thread didn’t get a chance to run when the highest memory was used. Therefore this report can be less than reality. Using tracemalloc would have reported the exact peak memory, but it doesn’t report memory allocations outside of python. So if some C++ CUDA extension allocated its own memory it won’t be reported. And therefore it was dropped in favor of the memory sampling approach, which reads the current process memory usage.
The GPU allocated and peak memory reporting is done with torch.cuda.memory_allocated() and torch.cuda.max_memory_allocated(). This metric reports only “deltas” for pytorch-specific allocations, as torch.cuda memory management system doesn’t track any memory allocated outside of pytorch. For example, the very first cuda call typically loads CUDA kernels, which may take from 0.5 to 2GB of GPU memory.
Note that this tracker doesn’t account for memory allocations outside of Trainer’s __init__, train, evaluate and predict calls.
Because evaluation calls may happen during train, we can’t handle nested invocations because torch.cuda.max_memory_allocated is a single counter, so if it gets reset by a nested eval call, train’s tracker will report incorrect info. If this pytorch issue gets resolved it will be possible to change this class to be re-entrant. Until then we will only track the outer level of train, evaluate and predict methods. Which means that if eval is called during train, it’s the latter that will account for its memory usage and that of the former.
This also means that if any other tool that is used along the Trainer calls torch.cuda.reset_peak_memory_stats, the gpu peak memory stats could be invalid. And the Trainer will disrupt the normal behavior of any such tools that rely on calling torch.cuda.reset_peak_memory_stats themselves.
For best performance you may want to consider turning the memory profiling off for production runs.
metrics_format
< source >
( metrics: typing.Dict[str, float] ) → metrics (Dict[str, float])
Parameters
metrics (Dict[str, float]) — The metrics returned from train/evaluate/predict
Returns
metrics (Dict[str, float])
The reformatted metrics
Reformat Trainer metrics values to a human-readable format
num_examples
Helper to get the number of samples in a ~torch.utils.data.DataLoader by accessing its dataset. When dataloader.dataset does not exist or has no length, estimates as best it can.
num_tokens
< source >
( train_dl: DataLoader max_steps: typing.Optional[int] = None )
Helper to get number of tokens in a ~torch.utils.data.DataLoader by enumerating dataloader.
pop_callback
< source >
( callback ) → ~transformer.TrainerCallback
Parameters
callback (type or ~transformer.TrainerCallback) — A ~transformer.TrainerCallback class or an instance of a ~transformer.TrainerCallback. In the first case, will pop the first member of that class found in the list of callbacks.
Returns
~transformer.TrainerCallback
The callback removed, if found.
Remove a callback from the current list of ~transformer.TrainerCallback and returns it.
If the callback is not found, returns None (and no error is raised).
predict
< source >
( test_dataset: Dataset ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'test' )
Parameters
test_dataset (Dataset) — Dataset to run the predictions on. If it is an datasets.Dataset, columns not accepted by the model.forward() method are automatically removed. Has to implement the method __len__
ignore_keys (List[str], optional) — A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
metric_key_prefix (str, optional, defaults to "test") — An optional prefix to be used as the metrics key prefix. For example the metrics “bleu” will be named “test_bleu” if the prefix is “test” (default)
Run prediction and returns predictions and potential metrics.
Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method will also return metrics, like in evaluate().
If your predictions or labels have different sequence length (for instance because you’re doing dynamic padding in a token classification task) the predictions will be padded (on the right) to allow for concatenation into one array. The padding index is -100.
Returns: NamedTuple A namedtuple with the following keys:
predictions (np.ndarray): The predictions on test_dataset.
label_ids (np.ndarray, optional): The labels (if the dataset contained some).
metrics (Dict[str, float], optional): The potential dictionary of metrics (if the dataset contained labels).
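A short sketch, where test_dataset is a placeholder dataset for a classification model:
output = trainer.predict(test_dataset)
class_ids = output.predictions.argmax(-1)  # logits -> predicted class ids
print(output.metrics)                      # includes test_loss and other metrics when the dataset has labels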
prediction_loop
< source >
( dataloader: DataLoader description: str prediction_loss_only: typing.Optional[bool] = None ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'eval' )
Prediction/evaluation loop, shared by Trainer.evaluate() and Trainer.predict().
Works both with or without labels.
prediction_step
< source >
( model: Module inputs: typing.Dict[str, typing.Union[torch.Tensor, typing.Any]] prediction_loss_only: bool ignore_keys: typing.Optional[typing.List[str]] = None ) → Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]
Parameters
model (nn.Module) — The model to evaluate.
inputs (Dict[str, Union[torch.Tensor, Any]]) — The inputs and targets of the model.
The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument labels. Check your model’s documentation for all accepted arguments.
prediction_loss_only (bool) — Whether or not to return the loss only.
ignore_keys (List[str], optional) — A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
Returns
Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]
A tuple with the loss, logits and labels (each being optional).
Perform an evaluation step on model using inputs.
Subclass and override to inject custom behavior.
push_to_hub
< source >
( commit_message: typing.Optional[str] = 'End of training' blocking: bool = True **kwargs )
Parameters
commit_message (str, optional, defaults to "End of training") — Message to commit while pushing.
blocking (bool, optional, defaults to True) — Whether the function should return only when the git push has finished.
kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to create_model_card().
Upload self.model and self.tokenizer to the 🤗 model hub on the repo self.args.hub_model_id.
remove_callback
< source >
( callback )
Parameters
callback (type or ~transformer.TrainerCallback) — A ~transformer.TrainerCallback class or an instance of a ~transformer.TrainerCallback. In the first case, will remove the first member of that class found in the list of callbacks.
Remove a callback from the current list of ~transformer.TrainerCallback.
save_metrics
< source >
( split metrics combined = True )
Parameters
split (str) — Mode/split name: one of train, eval, test, all
metrics (Dict[str, float]) — The metrics returned from train/evaluate/predict
combined (bool, optional, defaults to True) — Creates combined metrics by updating all_results.json with metrics of this call
Save metrics into a json file for that split, e.g. train_results.json.
Under distributed environment this is done only for a process with rank 0.
To understand the metrics please read the docstring of log_metrics(). The only difference is that raw unformatted numbers are saved in the current method.
save_model
< source >
( output_dir: typing.Optional[str] = None _internal_call: bool = False )
Will save the model, so you can reload it using from_pretrained().
Will only save from the main process.
save_state
Saves the Trainer state, since Trainer.save_model saves only the tokenizer with the model.
Under distributed environment this is done only for a process with rank 0.
train
< source >
( resume_from_checkpoint: typing.Union[bool, str, NoneType] = None trial: typing.Union[ForwardRef('optuna.Trial'), typing.Dict[str, typing.Any]] = None ignore_keys_for_eval: typing.Optional[typing.List[str]] = None **kwargs )
Parameters
resume_from_checkpoint (str or bool, optional) — If a str, local path to a saved checkpoint as saved by a previous instance of Trainer. If a bool and equals True, load the last checkpoint in args.output_dir as saved by a previous instance of Trainer. If present, training will resume from the model/optimizer/scheduler states loaded here.
trial (optuna.Trial or Dict[str, Any], optional) — The trial run or the hyperparameter dictionary for hyperparameter search.
ignore_keys_for_eval (List[str], optional) — A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions for evaluation during the training.
kwargs (Dict[str, Any], optional) — Additional keyword arguments used to hide deprecated arguments
Main training entry point.
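For example, assuming a configured trainer:
trainer.train()
# or, to resume from the last checkpoint saved in args.output_dir:
# trainer.train(resume_from_checkpoint=True)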
training_step
< source >
( model: Module inputs: typing.Dict[str, typing.Union[torch.Tensor, typing.Any]] ) → torch.Tensor
Parameters
model (nn.Module) — The model to train.
inputs (Dict[str, Union[torch.Tensor, Any]]) — The inputs and targets of the model.
The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument labels. Check your model’s documentation for all accepted arguments.
The tensor with training loss on this batch.
Perform a training step on a batch of inputs.
Subclass and override to inject custom behavior.
Seq2SeqTrainer
class transformers.Seq2SeqTrainer
< source >
( model: typing.Union[ForwardRef('PreTrainedModel'), torch.nn.modules.module.Module] = None args: TrainingArguments = None data_collator: typing.Optional[ForwardRef('DataCollator')] = None train_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None eval_dataset: typing.Union[torch.utils.data.dataset.Dataset, typing.Dict[str, torch.utils.data.dataset.Dataset], NoneType] = None tokenizer: typing.Optional[ForwardRef('PreTrainedTokenizerBase')] = None model_init: typing.Union[typing.Callable[[], ForwardRef('PreTrainedModel')], NoneType] = None compute_metrics: typing.Union[typing.Callable[[ForwardRef('EvalPrediction')], typing.Dict], NoneType] = None callbacks: typing.Optional[typing.List[ForwardRef('TrainerCallback')]] = None optimizers: typing.Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None) preprocess_logits_for_metrics: typing.Union[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor], NoneType] = None )
evaluate
< source >
( eval_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'eval' **gen_kwargs )
Parameters
eval_dataset (Dataset, optional) — Pass a dataset if you wish to override self.eval_dataset. If it is an Dataset, columns not accepted by the model.forward() method are automatically removed. It must implement the __len__ method.
ignore_keys (List[str], optional) — A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
metric_key_prefix (str, optional, defaults to "eval") — An optional prefix to be used as the metrics key prefix. For example the metrics “bleu” will be named “eval_bleu” if the prefix is "eval" (default)
max_length (int, optional) — The maximum target length to use when predicting with the generate method.
num_beams (int, optional) — Number of beams for beam search that will be used when predicting with the generate method. 1 means no beam search. gen_kwargs — Additional generate specific kwargs.
Run evaluation and returns metrics.
The calling script will be responsible for providing a method to compute metrics, as they are task-dependent (pass it to the init compute_metrics argument).
You can also subclass and override this method to inject custom behavior.
predict
< source >
( test_dataset: Dataset ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'test' **gen_kwargs )
Parameters
test_dataset (Dataset) — Dataset to run the predictions on. If it is a Dataset, columns not accepted by the model.forward() method are automatically removed. Has to implement the method __len__
ignore_keys (List[str], optional) — A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
metric_key_prefix (str, optional, defaults to "test") — An optional prefix to be used as the metrics key prefix. For example the metrics “bleu” will be named “test_bleu” if the prefix is "test" (default)
max_length (int, optional) — The maximum target length to use when predicting with the generate method.
num_beams (int, optional) — Number of beams for beam search that will be used when predicting with the generate method. 1 means no beam search. gen_kwargs — Additional generate specific kwargs.
Run prediction and returns predictions and potential metrics.
Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method will also return metrics, like in evaluate().
If your predictions or labels have different sequence lengths (for instance because you’re doing dynamic padding in a token classification task) the predictions will be padded (on the right) to allow for concatenation into one array. The padding index is -100.
Returns: NamedTuple A namedtuple with the following keys:
predictions (np.ndarray): The predictions on test_dataset.
label_ids (np.ndarray, optional): The labels (if the dataset contained some).
metrics (Dict[str, float], optional): The potential dictionary of metrics (if the dataset contained labels).
TrainingArguments
class transformers.TrainingArguments
< source >
( output_dir: str overwrite_output_dir: bool = False do_train: bool = False do_eval: bool = False do_predict: bool = False evaluation_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only: bool = False per_device_train_batch_size: int = 8 per_device_eval_batch_size: int = 8 per_gpu_train_batch_size: typing.Optional[int] = None per_gpu_eval_batch_size: typing.Optional[int] = None gradient_accumulation_steps: int = 1 eval_accumulation_steps: typing.Optional[int] = None eval_delay: typing.Optional[float] = 0 learning_rate: float = 5e-05 weight_decay: float = 0.0 adam_beta1: float = 0.9 adam_beta2: float = 0.999 adam_epsilon: float = 1e-08 max_grad_norm: float = 1.0 num_train_epochs: float = 3.0 max_steps: int = -1 lr_scheduler_type: typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' warmup_ratio: float = 0.0 warmup_steps: int = 0 log_level: typing.Optional[str] = 'passive' log_level_replica: typing.Optional[str] = 'warning' log_on_each_node: bool = True logging_dir: typing.Optional[str] = None logging_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step: bool = False logging_steps: float = 500 logging_nan_inf_filter: bool = True save_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' save_steps: float = 500 save_total_limit: typing.Optional[int] = None save_safetensors: typing.Optional[bool] = False save_on_each_node: bool = False no_cuda: bool = False use_cpu: bool = False use_mps_device: bool = False seed: int = 42 data_seed: typing.Optional[int] = None jit_mode_eval: bool = False use_ipex: bool = False bf16: bool = False fp16: bool = False fp16_opt_level: str = 'O1' half_precision_backend: str = 'auto' bf16_full_eval: bool = False fp16_full_eval: bool = False tf32: typing.Optional[bool] = None local_rank: int = -1 ddp_backend: typing.Optional[str] = None tpu_num_cores: typing.Optional[int] = None tpu_metrics_debug: bool = False debug: typing.Union[str, typing.List[transformers.debug_utils.DebugOption]] = '' dataloader_drop_last: bool = False eval_steps: typing.Optional[float] = None dataloader_num_workers: int = 0 past_index: int = -1 run_name: typing.Optional[str] = None disable_tqdm: typing.Optional[bool] = None remove_unused_columns: typing.Optional[bool] = True label_names: typing.Optional[typing.List[str]] = None load_best_model_at_end: typing.Optional[bool] = False metric_for_best_model: typing.Optional[str] = None greater_is_better: typing.Optional[bool] = None ignore_data_skip: bool = False sharded_ddp: typing.Union[typing.List[transformers.trainer_utils.ShardedDDPOption], str, NoneType] = '' fsdp: typing.Union[typing.List[transformers.trainer_utils.FSDPOption], str, NoneType] = '' fsdp_min_num_params: int = 0 fsdp_config: typing.Optional[str] = None fsdp_transformer_layer_cls_to_wrap: typing.Optional[str] = None deepspeed: typing.Optional[str] = None label_smoothing_factor: float = 0.0 optim: typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch' optim_args: typing.Optional[str] = None adafactor: bool = False group_by_length: bool = False length_column_name: typing.Optional[str] = 'length' report_to: typing.Optional[typing.List[str]] = None ddp_find_unused_parameters: typing.Optional[bool] = None ddp_bucket_cap_mb: typing.Optional[int] = None ddp_broadcast_buffers: typing.Optional[bool] = None dataloader_pin_memory: bool = True skip_memory_metrics: bool = True use_legacy_prediction_loop: bool = False push_to_hub: 
bool = False resume_from_checkpoint: typing.Optional[str] = None hub_model_id: typing.Optional[str] = None hub_strategy: typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token: typing.Optional[str] = None hub_private_repo: bool = False hub_always_push: bool = False gradient_checkpointing: bool = False include_inputs_for_metrics: bool = False fp16_backend: str = 'auto' push_to_hub_model_id: typing.Optional[str] = None push_to_hub_organization: typing.Optional[str] = None push_to_hub_token: typing.Optional[str] = None mp_parameters: str = '' auto_find_batch_size: bool = False full_determinism: bool = False torchdynamo: typing.Optional[str] = None ray_scope: typing.Optional[str] = 'last' ddp_timeout: typing.Optional[int] = 1800 torch_compile: bool = False torch_compile_backend: typing.Optional[str] = None torch_compile_mode: typing.Optional[str] = None dispatch_batches: typing.Optional[bool] = None include_tokens_per_second: typing.Optional[bool] = False )
Parameters
output_dir (str) — The output directory where the model predictions and checkpoints will be written.
overwrite_output_dir (bool, optional, defaults to False) — If True, overwrite the content of the output directory. Use this to continue training if output_dir points to a checkpoint directory.
do_train (bool, optional, defaults to False) — Whether to run training or not. This argument is not directly used by Trainer, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.
do_eval (bool, optional) — Whether to run evaluation on the validation set or not. Will be set to True if evaluation_strategy is different from "no". This argument is not directly used by Trainer, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.
do_predict (bool, optional, defaults to False) — Whether to run predictions on the test set or not. This argument is not directly used by Trainer, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.
evaluation_strategy (str or IntervalStrategy, optional, defaults to "no") — The evaluation strategy to adopt during training. Possible values are:
"no": No evaluation is done during training.
"steps": Evaluation is done (and logged) every eval_steps.
"epoch": Evaluation is done at the end of each epoch.
prediction_loss_only (bool, optional, defaults to False) — When performing evaluation and generating predictions, only returns the loss.
per_device_train_batch_size (int, optional, defaults to 8) — The batch size per GPU/XPU/TPU/MPS/NPU core/CPU for training.
per_device_eval_batch_size (int, optional, defaults to 8) — The batch size per GPU/XPU/TPU/MPS/NPU core/CPU for evaluation.
gradient_accumulation_steps (int, optional, defaults to 1) — Number of update steps to accumulate the gradients for, before performing a backward/update pass.
When using gradient accumulation, one step is counted as one step with backward pass. Therefore, logging, evaluation and saving will be conducted every gradient_accumulation_steps * xxx_step training examples.
eval_accumulation_steps (int, optional) — Number of predictions steps to accumulate the output tensors for, before moving the results to the CPU. If left unset, the whole predictions are accumulated on GPU/NPU/TPU before being moved to the CPU (faster but requires more memory).
eval_delay (float, optional) — Number of epochs or steps to wait for before the first evaluation can be performed, depending on the evaluation_strategy.
learning_rate (float, optional, defaults to 5e-5) — The initial learning rate for AdamW optimizer.
weight_decay (float, optional, defaults to 0) — The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights in AdamW optimizer.
adam_beta1 (float, optional, defaults to 0.9) — The beta1 hyperparameter for the AdamW optimizer.
adam_beta2 (float, optional, defaults to 0.999) — The beta2 hyperparameter for the AdamW optimizer.
adam_epsilon (float, optional, defaults to 1e-8) — The epsilon hyperparameter for the AdamW optimizer.
max_grad_norm (float, optional, defaults to 1.0) — Maximum gradient norm (for gradient clipping).
num_train_epochs (float, optional, defaults to 3.0) — Total number of training epochs to perform (if not an integer, the decimal part is the fraction of the last epoch to perform before stopping training).
max_steps (int, optional, defaults to -1) — If set to a positive number, the total number of training steps to perform. Overrides num_train_epochs. In case of using a finite iterable dataset the training may stop before reaching the set number of steps when all data is exhausted.
lr_scheduler_type (str or SchedulerType, optional, defaults to "linear") — The scheduler type to use. See the documentation of SchedulerType for all possible values.
warmup_ratio (float, optional, defaults to 0.0) — Ratio of total training steps used for a linear warmup from 0 to learning_rate.
warmup_steps (int, optional, defaults to 0) — Number of steps used for a linear warmup from 0 to learning_rate. Overrides any effect of warmup_ratio.
log_level (str, optional, defaults to passive) — Logger log level to use on the main process. Possible choices are the log levels as strings: ‘debug’, ‘info’, ‘warning’, ‘error’ and ‘critical’, plus a ‘passive’ level which doesn’t set anything and keeps the current log level for the Transformers library (which will be "warning" by default).
log_level_replica (str, optional, defaults to "warning") — Logger log level to use on replicas. Same choices as log_level.
log_on_each_node (bool, optional, defaults to True) — In multinode distributed training, whether to log using log_level once per node, or only on the main node.
logging_dir (str, optional) — TensorBoard log directory. Will default to *output_dir/runs/CURRENT_DATETIME_HOSTNAME*.
logging_strategy (str or IntervalStrategy, optional, defaults to "steps") — The logging strategy to adopt during training. Possible values are:
"no": No logging is done during training.
"epoch": Logging is done at the end of each epoch.
"steps": Logging is done every logging_steps.
logging_first_step (bool, optional, defaults to False) — Whether to log and evaluate the first global_step or not.
logging_steps (int or float, optional, defaults to 500) — Number of update steps between two logs if logging_strategy="steps". Should be an integer or a float in range [0,1). If smaller than 1, will be interpreted as ratio of total training steps.
logging_nan_inf_filter (bool, optional, defaults to True) — Whether to filter nan and inf losses for logging. If set to True the loss of every step that is nan or inf is filtered and the average loss of the current logging window is taken instead.
logging_nan_inf_filter only influences the logging of loss values; it does not change how the gradient is computed or applied to the model.
save_strategy (str or IntervalStrategy, optional, defaults to "steps") — The checkpoint save strategy to adopt during training. Possible values are:
"no": No save is done during training.
"epoch": Save is done at the end of each epoch.
"steps": Save is done every save_steps.
save_steps (int or float, optional, defaults to 500) — Number of update steps between two checkpoint saves if save_strategy="steps". Should be an integer or a float in range [0,1). If smaller than 1, will be interpreted as ratio of total training steps.
save_total_limit (int, optional) — If a value is passed, will limit the total amount of checkpoints. Deletes the older checkpoints in output_dir. When load_best_model_at_end is enabled, the “best” checkpoint according to metric_for_best_model will always be retained in addition to the most recent ones. For example, for save_total_limit=5 and load_best_model_at_end, the four last checkpoints will always be retained alongside the best model. When save_total_limit=1 and load_best_model_at_end, it is possible that two checkpoints are saved: the last one and the best one (if they are different).
save_safetensors (bool, optional, defaults to False) — Use safetensors saving and loading for state dicts instead of default torch.load and torch.save.
save_on_each_node (bool, optional, defaults to False) — When doing multi-node distributed training, whether to save models and checkpoints on each node, or only on the main one.
This should not be activated when the different nodes use the same storage as the files will be saved with the same names for each node.
use_cpu (bool, optional, defaults to False) — Whether or not to use cpu. If set to False, we will use cuda or mps device if available.
seed (int, optional, defaults to 42) — Random seed that will be set at the beginning of training. To ensure reproducibility across runs, use the ~Trainer.model_init function to instantiate the model if it has some randomly initialized parameters.
data_seed (int, optional) — Random seed to be used with data samplers. If not set, random generators for data sampling will use the same seed as seed. This can be used to ensure reproducibility of data sampling, independent of the model seed.
jit_mode_eval (bool, optional, defaults to False) — Whether or not to use PyTorch jit trace for inference.
use_ipex (bool, optional, defaults to False) — Use Intel extension for PyTorch when it is available. IPEX installation.
bf16 (bool, optional, defaults to False) — Whether to use bf16 16-bit (mixed) precision training instead of 32-bit training. Requires Ampere or higher NVIDIA architecture or using CPU (use_cpu) or Ascend NPU. This is an experimental API and it may change.
fp16 (bool, optional, defaults to False) — Whether to use fp16 16-bit (mixed) precision training instead of 32-bit training.
fp16_opt_level (str, optional, defaults to ‘O1’) — For fp16 training, Apex AMP optimization level selected in [‘O0’, ‘O1’, ‘O2’, and ‘O3’]. See details on the Apex documentation.
fp16_backend (str, optional, defaults to "auto") — This argument is deprecated. Use half_precision_backend instead.
half_precision_backend (str, optional, defaults to "auto") — The backend to use for mixed precision training. Must be one of "auto", "cuda_amp", "apex", "cpu_amp". "auto" will use CPU/CUDA AMP or APEX depending on the PyTorch version detected, while the other choices will force the requested backend.
bf16_full_eval (bool, optional, defaults to False) — Whether to use full bfloat16 evaluation instead of 32-bit. This will be faster and save memory but can harm metric values. This is an experimental API and it may change.
fp16_full_eval (bool, optional, defaults to False) — Whether to use full float16 evaluation instead of 32-bit. This will be faster and save memory but can harm metric values.
tf32 (bool, optional) — Whether to enable the TF32 mode, available in Ampere and newer GPU architectures. The default value depends on PyTorch’s version default of torch.backends.cuda.matmul.allow_tf32. For more details please refer to the TF32 documentation. This is an experimental API and it may change.
local_rank (int, optional, defaults to -1) — Rank of the process during distributed training.
ddp_backend (str, optional) — The backend to use for distributed training. Must be one of "nccl", "mpi", "ccl", "gloo", "hccl".
tpu_num_cores (int, optional) — When training on TPU, the number of TPU cores (automatically passed by launcher script).
dataloader_drop_last (bool, optional, defaults to False) — Whether to drop the last incomplete batch (if the length of the dataset is not divisible by the batch size) or not.
eval_steps (int or float, optional) — Number of update steps between two evaluations if evaluation_strategy="steps". Will default to the same value as logging_steps if not set. Should be an integer or a float in range [0,1). If smaller than 1, will be interpreted as ratio of total training steps.
dataloader_num_workers (int, optional, defaults to 0) — Number of subprocesses to use for data loading (PyTorch only). 0 means that the data will be loaded in the main process.
past_index (int, optional, defaults to -1) — Some models like TransformerXL or XLNet can make use of the past hidden states for their predictions. If this argument is set to a positive int, the Trainer will use the corresponding output (usually index 2) as the past state and feed it to the model at the next training step under the keyword argument mems.
run_name (str, optional) — A descriptor for the run. Typically used for wandb and mlflow logging.
disable_tqdm (bool, optional) — Whether or not to disable the tqdm progress bars and table of metrics produced by ~notebook.NotebookTrainingTracker in Jupyter Notebooks. Will default to True if the logging level is set to warn or lower (default), False otherwise.
remove_unused_columns (bool, optional, defaults to True) — Whether or not to automatically remove the columns unused by the model forward method.
(Note that this behavior is not implemented for TFTrainer yet.)
label_names (List[str], optional) — The list of keys in your dictionary of inputs that correspond to the labels.
Will eventually default to the list of argument names accepted by the model that contain the word “label”, except if the model used is one of the XxxForQuestionAnswering in which case it will also include the ["start_positions", "end_positions"] keys.
load_best_model_at_end (bool, optional, defaults to False) — Whether or not to load the best model found during training at the end of training. When this option is enabled, the best checkpoint will always be saved. See save_total_limit for more.
When set to True, the parameter save_strategy needs to be the same as evaluation_strategy, and when it is "steps", save_steps must be a round multiple of eval_steps (see the configuration sketch after this parameter list).
metric_for_best_model (str, optional) — Use in conjunction with load_best_model_at_end to specify the metric to use to compare two different models. Must be the name of a metric returned by the evaluation with or without the prefix "eval_". Will default to "loss" if unspecified and load_best_model_at_end=True (to use the evaluation loss).
If you set this value, greater_is_better will default to True. Don’t forget to set it to False if your metric is better when lower.
greater_is_better (bool, optional) — Use in conjunction with load_best_model_at_end and metric_for_best_model to specify if better models should have a greater metric or not. Will default to:
True if metric_for_best_model is set to a value that isn’t "loss" or "eval_loss".
False if metric_for_best_model is not set, or set to "loss" or "eval_loss".
ignore_data_skip (bool, optional, defaults to False) — When resuming training, whether or not to skip the epochs and batches to get the data loading at the same stage as in the previous training. If set to True, the training will begin faster (as that skipping step can take a long time) but will not yield the same results as the interrupted training would have.
sharded_ddp (bool, str or list of ShardedDDPOption, optional, defaults to '') — Use Sharded DDP training from FairScale (in distributed training only). This is an experimental feature.
A list of options along the following:
"simple": to use first instance of sharded DDP released by fairscale (ShardedDDP) similar to ZeRO-2.
"zero_dp_2": to use the second instance of sharded DPP released by fairscale (FullyShardedDDP) in Zero-2 mode (with reshard_after_forward=False).
"zero_dp_3": to use the second instance of sharded DPP released by fairscale (FullyShardedDDP) in Zero-3 mode (with reshard_after_forward=True).
"offload": to add ZeRO-offload (only compatible with "zero_dp_2" and "zero_dp_3").
If a string is passed, it will be split on space. If a bool is passed, it will be converted to an empty list for False and ["simple"] for True.
fsdp (bool, str or list of FSDPOption, optional, defaults to '') — Use PyTorch Fully Sharded Data Parallel (FSDP) training (in distributed training only).
A list of options along the following:
"full_shard": Shard parameters, gradients and optimizer states.
"shard_grad_op": Shard optimizer states and gradients.
"offload": Offload parameters and gradients to CPUs (only compatible with "full_shard" and "shard_grad_op").
"auto_wrap": Automatically recursively wrap layers with FSDP using default_auto_wrap_policy.
fsdp_config (str or dict, optional) — Config to be used with fsdp (PyTorch Fully Sharded Data Parallel training). The value is either the location of an FSDP json config file (e.g., fsdp_config.json) or an already loaded json file as dict.
A list of config options:
min_num_params (int, optional, defaults to 0): FSDP’s minimum number of parameters for Default Auto Wrapping. (useful only when fsdp field is passed).
transformer_layer_cls_to_wrap (List[str], optional): List of transformer layer class names (case-sensitive) to wrap, e.g, BertLayer, GPTJBlock, T5Block … (useful only when fsdp flag is passed).
backward_prefetch (str, optional) FSDP’s backward prefetch mode. Controls when to prefetch next set of parameters (useful only when fsdp field is passed).
A list of options along the following:
"backward_pre" : Prefetches the next set of parameters before the current set of parameter’s gradient computation.
"backward_post" : This prefetches the next set of parameters after the current set of parameter’s gradient computation.
forward_prefetch (bool, optional, defaults to False) FSDP’s forward prefetch mode (useful only when fsdp field is passed). If "True", then FSDP explicitly prefetches the next upcoming all-gather while executing in the forward pass.
limit_all_gathers (bool, optional, defaults to False) FSDP’s limit_all_gathers (useful only when fsdp field is passed). If "True", FSDP explicitly synchronizes the CPU thread to prevent too many in-flight all-gathers.
use_orig_params (bool, optional, defaults to False) If "True", allows non-uniform requires_grad during init, which means support for interspersed frozen and trainable parameters. Useful in cases such as parameter-efficient fine-tuning. Please refer to this blog post: https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019
sync_module_states (bool, optional, defaults to True) If "True", each individually wrapped FSDP unit will broadcast module parameters from rank 0 to ensure they are the same across all ranks after initialization.
xla (bool, optional, defaults to False): Whether to use PyTorch/XLA Fully Sharded Data Parallel Training. This is an experimental feature and its API may evolve in the future.
xla_fsdp_settings (dict, optional) The value is a dictionary which stores the XLA FSDP wrapping parameters.
For a complete list of options, please see here.
xla_fsdp_grad_ckpt (bool, optional, defaults to False): Will use gradient checkpointing over each nested XLA FSDP wrapped layer. This setting can only be used when the xla flag is set to true, and an auto wrapping policy is specified through fsdp_min_num_params or fsdp_transformer_layer_cls_to_wrap.
activation_checkpointing (bool, optional, defaults to False): If True, use activation checkpointing, a technique to reduce memory usage by clearing activations of certain layers and recomputing them during the backward pass. Effectively, this trades extra computation time for reduced memory usage.
deepspeed (str or dict, optional) — Use Deepspeed. This is an experimental feature and its API may evolve in the future. The value is either the location of a DeepSpeed json config file (e.g., ds_config.json) or an already loaded json file as a dict.
label_smoothing_factor (float, optional, defaults to 0.0) — The label smoothing factor to use. Zero means no label smoothing, otherwise the underlying one-hot-encoded labels are changed from 0s and 1s to label_smoothing_factor/num_labels and 1 - label_smoothing_factor + label_smoothing_factor/num_labels respectively.
debug (str or list of DebugOption, optional, defaults to "") — Enable one or more debug features. This is an experimental feature.
Possible options are:
"underflow_overflow": detects overflow in model’s input/outputs and reports the last frames that led to the event
"tpu_metrics_debug": print debug metrics on TPU
The options should be separated by whitespaces.
optim (str or training_args.OptimizerNames, optional, defaults to "adamw_torch") — The optimizer to use: adamw_hf, adamw_torch, adamw_torch_fused, adamw_apex_fused, adamw_anyprecision or adafactor.
optim_args (str, optional) — Optional arguments that are supplied to AnyPrecisionAdamW.
group_by_length (bool, optional, defaults to False) — Whether or not to group together samples of roughly the same length in the training dataset (to minimize padding applied and be more efficient). Only useful if applying dynamic padding.
length_column_name (str, optional, defaults to "length") — Column name for precomputed lengths. If the column exists, grouping by length will use these values rather than computing them on train startup. Ignored unless group_by_length is True and the dataset is an instance of Dataset.
report_to (str or List[str], optional, defaults to "all") — The list of integrations to report the results and logs to. Supported platforms are "azure_ml", "clearml", "codecarbon", "comet_ml", "dagshub", "flyte", "mlflow", "neptune", "tensorboard", and "wandb". Use "all" to report to all integrations installed, "none" for no integrations.
ddp_find_unused_parameters (bool, optional) — When using distributed training, the value of the flag find_unused_parameters passed to DistributedDataParallel. Will default to False if gradient checkpointing is used, True otherwise.
ddp_bucket_cap_mb (int, optional) — When using distributed training, the value of the flag bucket_cap_mb passed to DistributedDataParallel.
ddp_broadcast_buffers (bool, optional) — When using distributed training, the value of the flag broadcast_buffers passed to DistributedDataParallel. Will default to False if gradient checkpointing is used, True otherwise.
dataloader_pin_memory (bool, optional, defaults to True) — Whether you want to pin memory in data loaders or not. Will default to True.
skip_memory_metrics (bool, optional, defaults to True) — Whether to skip adding of memory profiler reports to metrics. This is skipped by default because it slows down the training and evaluation speed.
push_to_hub (bool, optional, defaults to False) — Whether or not to push the model to the Hub every time the model is saved. If this is activated, output_dir will begin a git directory synced with the repo (determined by hub_model_id) and the content will be pushed each time a save is triggered (depending on your save_strategy). Calling save_model() will also trigger a push.
If output_dir exists, it needs to be a local clone of the repository to which the Trainer will be pushed.
resume_from_checkpoint (str, optional) — The path to a folder with a valid checkpoint for your model. This argument is not directly used by Trainer, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.
hub_model_id (str, optional) — The name of the repository to keep in sync with the local output_dir. It can be a simple model ID in which case the model will be pushed in your namespace. Otherwise it should be the whole repository name, for instance "user_name/model", which allows you to push to an organization you are a member of with "organization_name/model". Will default to user_name/output_dir_name with output_dir_name being the name of output_dir.
hub_strategy (str or HubStrategy, optional, defaults to "every_save") — Defines the scope of what is pushed to the Hub and when. Possible values are:
"end": push the model, its configuration, the tokenizer (if passed along to the Trainer) and a draft of a model card when the save_model() method is called.
"every_save": push the model, its configuration, the tokenizer (if passed along to the Trainer) and a draft of a model card each time there is a model save. The pushes are asynchronous to not block training, and in case the save are very frequent, a new push is only attempted if the previous one is finished. A last push is made with the final model at the end of training.
"checkpoint": like "every_save" but the latest checkpoint is also pushed in a subfolder named last-checkpoint, allowing you to resume training easily with trainer.train(resume_from_checkpoint="last-checkpoint").
"all_checkpoints": like "checkpoint" but all checkpoints are pushed like they appear in the output folder (so you will get one checkpoint folder per folder in your final repository)
hub_token (str, optional) — The token to use to push the model to the Hub. Will default to the token in the cache folder obtained with huggingface-cli login.
hub_private_repo (bool, optional, defaults to False) — If True, the Hub repo will be set to private.
hub_always_push (bool, optional, defaults to False) — Unless this is True, the Trainer will skip pushing a checkpoint when the previous push is not finished.
gradient_checkpointing (bool, optional, defaults to False) — If True, use gradient checkpointing to save memory at the expense of slower backward pass.
include_inputs_for_metrics (bool, optional, defaults to False) — Whether or not the inputs will be passed to the compute_metrics function. This is intended for metrics that need inputs, predictions and references for scoring calculation in Metric class.
auto_find_batch_size (bool, optional, defaults to False) — Whether to find a batch size that will fit into memory automatically through exponential decay, avoiding CUDA Out-of-Memory errors. Requires accelerate to be installed (pip install accelerate)
full_determinism (bool, optional, defaults to False) — If True, enable_full_determinism() is called instead of set_seed() to ensure reproducible results in distributed training. Important: this will negatively impact the performance, so only use it for debugging.
torchdynamo (str, optional) — If set, the backend compiler for TorchDynamo. Possible choices are "eager", "aot_eager", "inductor", "nvfuser", "aot_nvfuser", "aot_cudagraphs", "ofi", "fx2trt", "onnxrt" and "ipex".
ray_scope (str, optional, defaults to "last") — The scope to use when doing hyperparameter search with Ray. By default, "last" will be used. Ray will then use the last checkpoint of all trials, compare those, and select the best one. However, other options are also available. See the Ray documentation for more options.
ddp_timeout (int, optional, defaults to 1800) — The timeout for torch.distributed.init_process_group calls, used to avoid GPU socket timeouts when performing slow operations in distributed runs. Please refer to the PyTorch documentation (https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) for more information.
use_mps_device (bool, optional, defaults to False) — This argument is deprecated. The mps device will be used if it is available, similar to the cuda device.
torch_compile (bool, optional, defaults to False) — Whether or not to compile the model using PyTorch 2.0 torch.compile.
This will use the best defaults for the torch.compile API. You can customize the defaults with the arguments torch_compile_backend and torch_compile_mode, but we don’t guarantee any of them will work as the support is progressively rolled out in PyTorch.
This flag and the whole compile API is experimental and subject to change in future releases.
torch_compile_backend (str, optional) — The backend to use in torch.compile. If set to any value, torch_compile will be set to True.
Refer to the PyTorch doc for possible values and note that they may change across PyTorch versions.
This flag is experimental and subject to change in future releases.
torch_compile_mode (str, optional) — The mode to use in torch.compile. If set to any value, torch_compile will be set to True.
Refer to the PyTorch doc for possible values and note that they may change across PyTorch versions.
This flag is experimental and subject to change in future releases.
include_tokens_per_second (bool, optional) — Whether or not to compute the number of tokens per second per device for training speed metrics. This will iterate over the entire training dataloader once beforehand and will slow down the entire process.
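For example, here is a minimal sketch (values chosen purely for illustration) of a configuration that evaluates and saves at the end of every epoch and tracks the best checkpoint by evaluation loss, as referenced in the load_best_model_at_end description above:
>>> from transformers import TrainingArguments

>>> args = TrainingArguments(
...     output_dir="working_dir",
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     load_best_model_at_end=True,
...     metric_for_best_model="eval_loss",
...     greater_is_better=False,
... )
>>> args.load_best_model_at_end
True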
TrainingArguments is the subset of the arguments we use in our example scripts which relate to the training loop itself.
Using HfArgumentParser we can turn this class into argparse arguments that can be specified on the command line.
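As a minimal sketch of that pattern (assuming the snippet lives in a training script, e.g. a hypothetical train.py invoked from the command line):
>>> from transformers import HfArgumentParser, TrainingArguments

>>> parser = HfArgumentParser(TrainingArguments)
>>> # e.g. python train.py --output_dir working_dir --learning_rate 1e-4 --num_train_epochs 5
>>> (training_args,) = parser.parse_args_into_dataclasses()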
get_process_log_level
< source >
( )
Returns the log level to be used depending on whether this process is the main process of node 0, the main process of a non-0 node, or a non-main process.
For the main process the log level defaults to the logging level set (logging.WARNING if you didn’t do anything) unless overridden by log_level argument.
For the replica processes the log level defaults to logging.WARNING unless overridden by log_level_replica argument.
The choice between the main and replica process settings is made according to the return value of should_log.
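A small usage sketch of how the returned level can be applied to a script-level logger (the logger name here is illustrative):
>>> import logging
>>> from transformers import TrainingArguments

>>> training_args = TrainingArguments("working_dir", log_level="info")
>>> logger = logging.getLogger(__name__)
>>> logger.setLevel(training_args.get_process_log_level())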
get_warmup_steps
< source >
( num_training_steps: int )
Get number of steps used for a linear warmup.
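For instance, a brief sketch of the ratio-based case (assuming warmup_steps is left at 0 so warmup_ratio applies):
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir", warmup_ratio=0.1)
>>> args.get_warmup_steps(1000)
100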
main_process_first
< source >
( local = True desc = 'work' )
Parameters
local (bool, optional, defaults to True) — if True, first means the process of rank 0 of each node; if False, first means the process of rank 0 of node rank 0. In a multi-node environment with a shared filesystem you most likely will want to use local=False so that only the main process of the first node will do the processing. If, however, the filesystem is not shared, then the main process of each node will need to do the processing, which is the default behavior.
desc (str, optional, defaults to "work") — a work description to be used in debug logs
A context manager for a torch distributed environment where one needs to do something on the main process while blocking replicas, and, once it is finished, release the replicas.
One such use is the map feature of datasets, which, to be efficient, should be run once on the main process; upon completion it saves a cached version of the results, which then automatically gets loaded by the replicas.
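A usage sketch (here dataset and tokenize_function are placeholders assumed to be defined elsewhere):
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> with args.main_process_first(desc="dataset map pre-processing"):
...     tokenized_dataset = dataset.map(tokenize_function, batched=True)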
set_dataloader
< source >
( train_batch_size: int = 8 eval_batch_size: int = 8 drop_last: bool = False num_workers: int = 0 pin_memory: bool = True auto_find_batch_size: bool = False ignore_data_skip: bool = False sampler_seed: typing.Optional[int] = None )
Parameters
drop_last (bool, optional, defaults to False) — Whether to drop the last incomplete batch (if the length of the dataset is not divisible by the batch size) or not.
num_workers (int, optional, defaults to 0) — Number of subprocesses to use for data loading (PyTorch only). 0 means that the data will be loaded in the main process.
pin_memory (bool, optional, defaults to True) — Whether you want to pin memory in data loaders or not. Will default to True.
auto_find_batch_size (bool, optional, defaults to False) — Whether to find a batch size that will fit into memory automatically through exponential decay, avoiding CUDA Out-of-Memory errors. Requires accelerate to be installed (pip install accelerate)
ignore_data_skip (bool, optional, defaults to False) — When resuming training, whether or not to skip the epochs and batches to get the data loading at the same stage as in the previous training. If set to True, the training will begin faster (as that skipping step can take a long time) but will not yield the same results as the interrupted training would have.
sampler_seed (int, optional) — Random seed to be used with data samplers. If not set, random generators for data sampling will use the same seed as self.seed. This can be used to ensure reproducibility of data sampling, independent of the model seed.
A method that regroups all arguments linked to the dataloaders creation.
Example:
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_dataloader(train_batch_size=16, eval_batch_size=64)
>>> args.per_device_train_batch_size
16
set_evaluate
< source >
( strategy: typing.Union[str, transformers.trainer_utils.IntervalStrategy] = 'no' steps: int = 500 batch_size: int = 8 accumulation_steps: typing.Optional[int] = None delay: typing.Optional[float] = None loss_only: bool = False jit_mode: bool = False )
Parameters
strategy (str or IntervalStrategy, optional, defaults to "no") — The evaluation strategy to adopt during training. Possible values are:
"no": No evaluation is done during training.
"steps": Evaluation is done (and logged) every steps.
"epoch": Evaluation is done at the end of each epoch.
Setting a strategy different from "no" will set self.do_eval to True.
steps (int, optional, defaults to 500) — Number of update steps between two evaluations if strategy="steps".
batch_size (int, optional, defaults to 8) — The batch size per device (GPU/TPU core/CPU…) used for evaluation.
accumulation_steps (int, optional) — Number of predictions steps to accumulate the output tensors for, before moving the results to the CPU. If left unset, the whole predictions are accumulated on GPU/TPU before being moved to the CPU (faster but requires more memory).
delay (float, optional) — Number of epochs or steps to wait for before the first evaluation can be performed, depending on the evaluation_strategy.
loss_only (bool, optional, defaults to False) — Ignores all outputs except the loss.
jit_mode (bool, optional) — Whether or not to use PyTorch jit trace for inference.
A method that regroups all arguments linked to the evaluation.
Example:
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_evaluate(strategy="steps", steps=100)
>>> args.eval_steps
100
set_logging
< source >
( strategy: typing.Union[str, transformers.trainer_utils.IntervalStrategy] = 'steps' steps: int = 500 report_to: typing.Union[str, typing.List[str]] = 'none' level: str = 'passive' first_step: bool = False nan_inf_filter: bool = False on_each_node: bool = False replica_level: str = 'passive' )
Parameters
strategy (str or IntervalStrategy, optional, defaults to "steps") — The logging strategy to adopt during training. Possible values are:
"no": No save is done during training.
"epoch": Save is done at the end of each epoch.
"steps": Save is done every save_steps.
steps (int, optional, defaults to 500) — Number of update steps between two logs if strategy="steps".
level (str, optional, defaults to "passive") — Logger log level to use on the main process. Possible choices are the log levels as strings: "debug", "info", "warning", "error" and "critical", plus a "passive" level which doesn’t set anything and lets the application set the level.
report_to (str or List[str], optional, defaults to "none") — The list of integrations to report the results and logs to. Supported platforms are "azure_ml", "comet_ml", "mlflow", "neptune", "tensorboard","clearml" and "wandb". Use "all" to report to all integrations installed, "none" for no integrations.
first_step (bool, optional, defaults to False) — Whether to log and evaluate the first global_step or not.
nan_inf_filter (bool, optional, defaults to True) — Whether to filter nan and inf losses for logging. If set to True the loss of every step that is nan or inf is filtered and the average loss of the current logging window is taken instead.
nan_inf_filter only influences the logging of loss values; it does not change how the gradient is computed or applied to the model.
on_each_node (bool, optional, defaults to True) — In multinode distributed training, whether to log using log_level once per node, or only on the main node.
replica_level (str, optional, defaults to "passive") — Logger log level to use on replicas. Same choices as log_level
A method that regroups all arguments linked to logging.
Example:
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_logging(strategy="steps", steps=100)
>>> args.logging_steps
100
set_lr_scheduler
< source >
( name: typing.Union[str, transformers.trainer_utils.SchedulerType] = 'linear' num_epochs: float = 3.0 max_steps: int = -1 warmup_ratio: float = 0 warmup_steps: int = 0 )
Parameters
name (str or SchedulerType, optional, defaults to "linear") — The scheduler type to use. See the documentation of SchedulerType for all possible values.
num_epochs (float, optional, defaults to 3.0) — Total number of training epochs to perform (if not an integer, the decimal part is the fraction of the last epoch to perform before stopping training).
max_steps (int, optional, defaults to -1) — If set to a positive number, the total number of training steps to perform. Overrides num_train_epochs. In case of using a finite iterable dataset the training may stop before reaching the set number of steps when all data is exhausted.
warmup_ratio (float, optional, defaults to 0.0) — Ratio of total training steps used for a linear warmup from 0 to learning_rate.
warmup_steps (int, optional, defaults to 0) — Number of steps used for a linear warmup from 0 to learning_rate. Overrides any effect of warmup_ratio.
A method that regroups all arguments linked to the learning rate scheduler and its hyperparameters.
Example:
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_lr_scheduler(name="cosine", warmup_ratio=0.05)
>>> args.warmup_ratio
0.05
set_optimizer
< source >
( name: typing.Union[str, transformers.training_args.OptimizerNames] = 'adamw_torch' learning_rate: float = 5e-05 weight_decay: float = 0 beta1: float = 0.9 beta2: float = 0.999 epsilon: float = 1e-08 args: typing.Optional[str] = None )
Parameters
name (str or training_args.OptimizerNames, optional, defaults to "adamw_torch") — The optimizer to use: "adamw_hf", "adamw_torch", "adamw_torch_fused", "adamw_apex_fused", "adamw_anyprecision" or "adafactor".
learning_rate (float, optional, defaults to 5e-5) — The initial learning rate.
weight_decay (float, optional, defaults to 0) — The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights.
beta1 (float, optional, defaults to 0.9) — The beta1 hyperparameter for the adam optimizer or its variants.
beta2 (float, optional, defaults to 0.999) — The beta2 hyperparameter for the adam optimizer or its variants.
epsilon (float, optional, defaults to 1e-8) — The epsilon hyperparameter for the adam optimizer or its variants.
args (str, optional) — Optional arguments that are supplied to AnyPrecisionAdamW (only useful when optim="adamw_anyprecision").
A method that regroups all arguments linked to the optimizer and its hyperparameters.
Example:
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_optimizer(name="adamw_torch", beta1=0.8)
>>> args.optim
'adamw_torch'
set_push_to_hub
< source >
( model_id: str strategy: typing.Union[str, transformers.trainer_utils.HubStrategy] = 'every_save' token: typing.Optional[str] = None private_repo: bool = False always_push: bool = False )
Parameters
model_id (str) — The name of the repository to keep in sync with the local output_dir. It can be a simple model ID in which case the model will be pushed in your namespace. Otherwise it should be the whole repository name, for instance "user_name/model", which allows you to push to an organization you are a member of with "organization_name/model".
strategy (str or HubStrategy, optional, defaults to "every_save") — Defines the scope of what is pushed to the Hub and when. Possible values are:
"end": push the model, its configuration, the tokenizer (if passed along to the Trainer) and a draft of a model card when the save_model() method is called.
"every_save": push the model, its configuration, the tokenizer (if passed along to the Trainer) and a draft of a model card each time there is a model save. The pushes are asynchronous to not block training, and in case the save are very frequent, a new push is only attempted if the previous one is finished. A last push is made with the final model at the end of training.
"checkpoint": like "every_save" but the latest checkpoint is also pushed in a subfolder named last-checkpoint, allowing you to resume training easily with trainer.train(resume_from_checkpoint="last-checkpoint").
"all_checkpoints": like "checkpoint" but all checkpoints are pushed like they appear in the output folder (so you will get one checkpoint folder per folder in your final repository)
token (str, optional) — The token to use to push the model to the Hub. Will default to the token in the cache folder obtained with huggingface-cli login.
private_repo (bool, optional, defaults to False) — If True, the Hub repo will be set to private.
always_push (bool, optional, defaults to False) — Unless this is True, the Trainer will skip pushing a checkpoint when the previous push is not finished.
A method that regroups all arguments linked to synchronizing checkpoints with the Hub.
Calling this method will set self.push_to_hub to True, which means the output_dir will begin a git directory synced with the repo (determined by model_id) and the content will be pushed each time a save is triggered (depending on self.save_strategy). Calling save_model() will also trigger a push.
Example:
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_push_to_hub("me/awesome-model")
>>> args.hub_model_id
'me/awesome-model'
set_save
< source >
( strategy: typing.Union[str, transformers.trainer_utils.IntervalStrategy] = 'steps' steps: int = 500 total_limit: typing.Optional[int] = None on_each_node: bool = False )
Parameters
strategy (str or IntervalStrategy, optional, defaults to "steps") — The checkpoint save strategy to adopt during training. Possible values are:
"no": No save is done during training.
"epoch": Save is done at the end of each epoch.
"steps": Save is done every save_steps.
steps (int, optional, defaults to 500) — Number of update steps between two checkpoint saves if strategy="steps".
total_limit (int, optional) — If a value is passed, will limit the total amount of checkpoints. Deletes the older checkpoints in output_dir.
on_each_node (bool, optional, defaults to False) — When doing multi-node distributed training, whether to save models and checkpoints on each node, or only on the main one.
This should not be activated when the different nodes use the same storage as the files will be saved with the same names for each node.
A method that regroups all arguments linked to checkpoint saving.
Example:
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_save(strategy="steps", steps=100)
>>> args.save_steps
100
set_testing
< source >
( batch_size: int = 8 loss_only: bool = False jit_mode: bool = False )
Parameters
batch_size (int, optional, defaults to 8) — The batch size per device (GPU/TPU core/CPU…) used for testing.
loss_only (bool, optional, defaults to False) — Ignores all outputs except the loss.
jit_mode (bool, optional) — Whether or not to use PyTorch jit trace for inference.
A method that regroups all basic arguments linked to testing on a held-out dataset.
Calling this method will automatically set self.do_predict to True.
Example:
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_testing(batch_size=32)
>>> args.per_device_eval_batch_size
32
set_training
< source >
( learning_rate: float = 5e-05 batch_size: int = 8 weight_decay: float = 0 num_epochs: float = 3 max_steps: int = -1 gradient_accumulation_steps: int = 1 seed: int = 42 gradient_checkpointing: bool = False )
Parameters
learning_rate (float, optional, defaults to 5e-5) — The initial learning rate for the optimizer.
batch_size (int, optional, defaults to 8) — The batch size per device (GPU/TPU core/CPU…) used for training.
weight_decay (float, optional, defaults to 0) — The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights in the optimizer.
num_epochs (float, optional, defaults to 3.0) — Total number of training epochs to perform (if not an integer, the decimal part is the fraction of the last epoch to perform before stopping training).
max_steps (int, optional, defaults to -1) — If set to a positive number, the total number of training steps to perform. Overrides num_train_epochs. In case of using a finite iterable dataset the training may stop before reaching the set number of steps when all data is exhausted.
gradient_accumulation_steps (int, optional, defaults to 1) — Number of update steps to accumulate the gradients for, before performing a backward/update pass.
When using gradient accumulation, one step is counted as one step with backward pass. Therefore, logging, evaluation and saving will be conducted every gradient_accumulation_steps * xxx_step training examples.
seed (int, optional, defaults to 42) — Random seed that will be set at the beginning of training. To ensure reproducibility across runs, use the ~Trainer.model_init function to instantiate the model if it has some randomly initialized parameters.
gradient_checkpointing (bool, optional, defaults to False) — If True, use gradient checkpointing to save memory at the expense of slower backward pass.
A method that regroups all basic arguments linked to the training.
Calling this method will automatically set self.do_train to True.
Example:
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_training(learning_rate=1e-4, batch_size=32)
>>> args.learning_rate
1e-4
Serializes this instance while replacing Enum members by their values (for JSON serialization support). It obfuscates the token values by removing them.
Serializes this instance to a JSON string.
Sanitized serialization to use with TensorBoard’s hparams
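A short sketch of how this serialization can be used, assuming these lines describe the to_dict and to_json_string helpers:
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> as_dict = args.to_dict()  # plain dict with Enum members replaced by their values and token values obfuscated
>>> as_json = args.to_json_string()  # JSON string of the same arguments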
Seq2SeqTrainingArguments
class transformers.Seq2SeqTrainingArguments
< source >
( output_dir: str overwrite_output_dir: bool = False do_train: bool = False do_eval: bool = False do_predict: bool = False evaluation_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only: bool = False per_device_train_batch_size: int = 8 per_device_eval_batch_size: int = 8 per_gpu_train_batch_size: typing.Optional[int] = None per_gpu_eval_batch_size: typing.Optional[int] = None gradient_accumulation_steps: int = 1 eval_accumulation_steps: typing.Optional[int] = None eval_delay: typing.Optional[float] = 0 learning_rate: float = 5e-05 weight_decay: float = 0.0 adam_beta1: float = 0.9 adam_beta2: float = 0.999 adam_epsilon: float = 1e-08 max_grad_norm: float = 1.0 num_train_epochs: float = 3.0 max_steps: int = -1 lr_scheduler_type: typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' warmup_ratio: float = 0.0 warmup_steps: int = 0 log_level: typing.Optional[str] = 'passive' log_level_replica: typing.Optional[str] = 'warning' log_on_each_node: bool = True logging_dir: typing.Optional[str] = None logging_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step: bool = False logging_steps: float = 500 logging_nan_inf_filter: bool = True save_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' save_steps: float = 500 save_total_limit: typing.Optional[int] = None save_safetensors: typing.Optional[bool] = False save_on_each_node: bool = False no_cuda: bool = False use_cpu: bool = False use_mps_device: bool = False seed: int = 42 data_seed: typing.Optional[int] = None jit_mode_eval: bool = False use_ipex: bool = False bf16: bool = False fp16: bool = False fp16_opt_level: str = 'O1' half_precision_backend: str = 'auto' bf16_full_eval: bool = False fp16_full_eval: bool = False tf32: typing.Optional[bool] = None local_rank: int = -1 ddp_backend: typing.Optional[str] = None tpu_num_cores: typing.Optional[int] = None tpu_metrics_debug: bool = False debug: typing.Union[str, typing.List[transformers.debug_utils.DebugOption]] = '' dataloader_drop_last: bool = False eval_steps: typing.Optional[float] = None dataloader_num_workers: int = 0 past_index: int = -1 run_name: typing.Optional[str] = None disable_tqdm: typing.Optional[bool] = None remove_unused_columns: typing.Optional[bool] = True label_names: typing.Optional[typing.List[str]] = None load_best_model_at_end: typing.Optional[bool] = False metric_for_best_model: typing.Optional[str] = None greater_is_better: typing.Optional[bool] = None ignore_data_skip: bool = False sharded_ddp: typing.Union[typing.List[transformers.trainer_utils.ShardedDDPOption], str, NoneType] = '' fsdp: typing.Union[typing.List[transformers.trainer_utils.FSDPOption], str, NoneType] = '' fsdp_min_num_params: int = 0 fsdp_config: typing.Optional[str] = None fsdp_transformer_layer_cls_to_wrap: typing.Optional[str] = None deepspeed: typing.Optional[str] = None label_smoothing_factor: float = 0.0 optim: typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch' optim_args: typing.Optional[str] = None adafactor: bool = False group_by_length: bool = False length_column_name: typing.Optional[str] = 'length' report_to: typing.Optional[typing.List[str]] = None ddp_find_unused_parameters: typing.Optional[bool] = None ddp_bucket_cap_mb: typing.Optional[int] = None ddp_broadcast_buffers: typing.Optional[bool] = None dataloader_pin_memory: bool = True skip_memory_metrics: bool = True use_legacy_prediction_loop: bool = False push_to_hub: 
bool = False resume_from_checkpoint: typing.Optional[str] = None hub_model_id: typing.Optional[str] = None hub_strategy: typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token: typing.Optional[str] = None hub_private_repo: bool = False hub_always_push: bool = False gradient_checkpointing: bool = False include_inputs_for_metrics: bool = False fp16_backend: str = 'auto' push_to_hub_model_id: typing.Optional[str] = None push_to_hub_organization: typing.Optional[str] = None push_to_hub_token: typing.Optional[str] = None mp_parameters: str = '' auto_find_batch_size: bool = False full_determinism: bool = False torchdynamo: typing.Optional[str] = None ray_scope: typing.Optional[str] = 'last' ddp_timeout: typing.Optional[int] = 1800 torch_compile: bool = False torch_compile_backend: typing.Optional[str] = None torch_compile_mode: typing.Optional[str] = None dispatch_batches: typing.Optional[bool] = None include_tokens_per_second: typing.Optional[bool] = False sortish_sampler: bool = False predict_with_generate: bool = False generation_max_length: typing.Optional[int] = None generation_num_beams: typing.Optional[int] = None generation_config: typing.Union[str, pathlib.Path, transformers.generation.configuration_utils.GenerationConfig, NoneType] = None )
Parameters
output_dir (str) — The output directory where the model predictions and checkpoints will be written.
overwrite_output_dir (bool, optional, defaults to False) — If True, overwrite the content of the output directory. Use this to continue training if output_dir points to a checkpoint directory.
do_train (bool, optional, defaults to False) — Whether to run training or not. This argument is not directly used by Trainer, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.
do_eval (bool, optional) — Whether to run evaluation on the validation set or not. Will be set to True if evaluation_strategy is different from "no". This argument is not directly used by Trainer, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.
do_predict (bool, optional, defaults to False) — Whether to run predictions on the test set or not. This argument is not directly used by Trainer, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.
evaluation_strategy (str or IntervalStrategy, optional, defaults to "no") — The evaluation strategy to adopt during training. Possible values are:
"no": No evaluation is done during training.
"steps": Evaluation is done (and logged) every eval_steps.
"epoch": Evaluation is done at the end of each epoch.
prediction_loss_only (bool, optional, defaults to False) — When performing evaluation and generating predictions, only returns the loss.
per_device_train_batch_size (int, optional, defaults to 8) — The batch size per GPU/XPU/TPU/MPS/NPU core/CPU for training.
per_device_eval_batch_size (int, optional, defaults to 8) — The batch size per GPU/XPU/TPU/MPS/NPU core/CPU for evaluation.
gradient_accumulation_steps (int, optional, defaults to 1) — Number of update steps to accumulate the gradients for, before performing a backward/update pass.
When using gradient accumulation, one step is counted as one step with backward pass. Therefore, logging, evaluation and saving will be conducted every gradient_accumulation_steps * xxx_step training examples.
eval_accumulation_steps (int, optional) — Number of predictions steps to accumulate the output tensors for, before moving the results to the CPU. If left unset, the whole predictions are accumulated on GPU/NPU/TPU before being moved to the CPU (faster but requires more memory).
eval_delay (float, optional) — Number of epochs or steps to wait for before the first evaluation can be performed, depending on the evaluation_strategy.
learning_rate (float, optional, defaults to 5e-5) — The initial learning rate for AdamW optimizer.
weight_decay (float, optional, defaults to 0) — The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights in AdamW optimizer.
adam_beta1 (float, optional, defaults to 0.9) — The beta1 hyperparameter for the AdamW optimizer.
adam_beta2 (float, optional, defaults to 0.999) — The beta2 hyperparameter for the AdamW optimizer.
adam_epsilon (float, optional, defaults to 1e-8) — The epsilon hyperparameter for the AdamW optimizer.
max_grad_norm (float, optional, defaults to 1.0) — Maximum gradient norm (for gradient clipping).
num_train_epochs (float, optional, defaults to 3.0) — Total number of training epochs to perform (if not an integer, the decimal part is the fraction of the last epoch to perform before stopping training).
max_steps (int, optional, defaults to -1) — If set to a positive number, the total number of training steps to perform. Overrides num_train_epochs. In case of using a finite iterable dataset the training may stop before reaching the set number of steps when all data is exhausted.
lr_scheduler_type (str or SchedulerType, optional, defaults to "linear") — The scheduler type to use. See the documentation of SchedulerType for all possible values.
warmup_ratio (float, optional, defaults to 0.0) — Ratio of total training steps used for a linear warmup from 0 to learning_rate.
warmup_steps (int, optional, defaults to 0) — Number of steps used for a linear warmup from 0 to learning_rate. Overrides any effect of warmup_ratio.
log_level (str, optional, defaults to passive) — Logger log level to use on the main process. Possible choices are the log levels as strings: ‘debug’, ‘info’, ‘warning’, ‘error’ and ‘critical’, plus a ‘passive’ level which doesn’t set anything and keeps the current log level for the Transformers library (which will be "warning" by default).
log_level_replica (str, optional, defaults to "warning") — Logger log level to use on replicas. Same choices as log_level.
log_on_each_node (bool, optional, defaults to True) — In multinode distributed training, whether to log using log_level once per node, or only on the main node.
logging_dir (str, optional) — TensorBoard log directory. Will default to *output_dir/runs/CURRENT_DATETIME_HOSTNAME*.
logging_strategy (str or IntervalStrategy, optional, defaults to "steps") — The logging strategy to adopt during training. Possible values are:
"no": No logging is done during training.
"epoch": Logging is done at the end of each epoch.
"steps": Logging is done every logging_steps.
logging_first_step (bool, optional, defaults to False) — Whether to log and evaluate the first global_step or not.
logging_steps (int or float, optional, defaults to 500) — Number of update steps between two logs if logging_strategy="steps". Should be an integer or a float in range [0,1). If smaller than 1, will be interpreted as ratio of total training steps.
logging_nan_inf_filter (bool, optional, defaults to True) — Whether to filter nan and inf losses for logging. If set to True the loss of every step that is nan or inf is filtered and the average loss of the current logging window is taken instead.
logging_nan_inf_filter only influences the logging of loss values; it does not change how the gradient is computed or applied to the model.
save_strategy (str or IntervalStrategy, optional, defaults to "steps") — The checkpoint save strategy to adopt during training. Possible values are:
"no": No save is done during training.
"epoch": Save is done at the end of each epoch.
"steps": Save is done every save_steps.
save_steps (int or float, optional, defaults to 500) — Number of updates steps before two checkpoint saves if save_strategy="steps". Should be an integer or a float in range [0,1). If smaller than 1, will be interpreted as ratio of total training steps.
save_total_limit (int, optional) — If a value is passed, will limit the total amount of checkpoints. Deletes the older checkpoints in output_dir. When load_best_model_at_end is enabled, the “best” checkpoint according to metric_for_best_model will always be retained in addition to the most recent ones. For example, for save_total_limit=5 and load_best_model_at_end, the four last checkpoints will always be retained alongside the best model. When save_total_limit=1 and load_best_model_at_end, it is possible that two checkpoints are saved: the last one and the best one (if they are different).
save_safetensors (bool, optional, defaults to False) — Use safetensors saving and loading for state dicts instead of default torch.load and torch.save.
save_on_each_node (bool, optional, defaults to False) — When doing multi-node distributed training, whether to save models and checkpoints on each node, or only on the main one.
This should not be activated when the different nodes use the same storage as the files will be saved with the same names for each node.
use_cpu (bool, optional, defaults to False) — Whether or not to use cpu. If set to False, we will use cuda or mps device if available.
seed (int, optional, defaults to 42) — Random seed that will be set at the beginning of training. To ensure reproducibility across runs, use the ~Trainer.model_init function to instantiate the model if it has some randomly initialized parameters.
data_seed (int, optional) — Random seed to be used with data samplers. If not set, random generators for data sampling will use the same seed as seed. This can be used to ensure reproducibility of data sampling, independent of the model seed.
jit_mode_eval (bool, optional, defaults to False) — Whether or not to use PyTorch jit trace for inference.
use_ipex (bool, optional, defaults to False) — Use Intel extension for PyTorch when it is available. IPEX installation.
bf16 (bool, optional, defaults to False) — Whether to use bf16 16-bit (mixed) precision training instead of 32-bit training. Requires Ampere or higher NVIDIA architecture or using CPU (use_cpu) or Ascend NPU. This is an experimental API and it may change.
fp16 (bool, optional, defaults to False) — Whether to use fp16 16-bit (mixed) precision training instead of 32-bit training.
fp16_opt_level (str, optional, defaults to ‘O1’) — For fp16 training, Apex AMP optimization level selected in [‘O0’, ‘O1’, ‘O2’, and ‘O3’]. See details on the Apex documentation.
fp16_backend (str, optional, defaults to "auto") — This argument is deprecated. Use half_precision_backend instead.
half_precision_backend (str, optional, defaults to "auto") — The backend to use for mixed precision training. Must be one of "auto", "cuda_amp", "apex", "cpu_amp". "auto" will use CPU/CUDA AMP or APEX depending on the PyTorch version detected, while the other choices will force the requested backend.
bf16_full_eval (bool, optional, defaults to False) — Whether to use full bfloat16 evaluation instead of 32-bit. This will be faster and save memory but can harm metric values. This is an experimental API and it may change.
fp16_full_eval (bool, optional, defaults to False) — Whether to use full float16 evaluation instead of 32-bit. This will be faster and save memory but can harm metric values.
tf32 (bool, optional) — Whether to enable the TF32 mode, available in Ampere and newer GPU architectures. The default value depends on PyTorch’s version default of torch.backends.cuda.matmul.allow_tf32. For more details please refer to the TF32 documentation. This is an experimental API and it may change.
local_rank (int, optional, defaults to -1) — Rank of the process during distributed training.
ddp_backend (str, optional) — The backend to use for distributed training. Must be one of "nccl", "mpi", "ccl", "gloo", "hccl".
tpu_num_cores (int, optional) — When training on TPU, the number of TPU cores (automatically passed by launcher script).
dataloader_drop_last (bool, optional, defaults to False) — Whether to drop the last incomplete batch (if the length of the dataset is not divisible by the batch size) or not.
eval_steps (int or float, optional) — Number of update steps between two evaluations if evaluation_strategy="steps". Will default to the same value as logging_steps if not set. Should be an integer or a float in range [0,1). If smaller than 1, will be interpreted as ratio of total training steps.
dataloader_num_workers (int, optional, defaults to 0) — Number of subprocesses to use for data loading (PyTorch only). 0 means that the data will be loaded in the main process.
past_index (int, optional, defaults to -1) — Some models like TransformerXL or XLNet can make use of the past hidden states for their predictions. If this argument is set to a positive int, the Trainer will use the corresponding output (usually index 2) as the past state and feed it to the model at the next training step under the keyword argument mems.
run_name (str, optional) — A descriptor for the run. Typically used for wandb and mlflow logging.
disable_tqdm (bool, optional) — Whether or not to disable the tqdm progress bars and table of metrics produced by ~notebook.NotebookTrainingTracker in Jupyter Notebooks. Will default to True if the logging level is set to warn or lower (default), False otherwise.
remove_unused_columns (bool, optional, defaults to True) — Whether or not to automatically remove the columns unused by the model forward method.
(Note that this behavior is not implemented for TFTrainer yet.)
label_names (List[str], optional) — The list of keys in your dictionary of inputs that correspond to the labels.
Will eventually default to the list of argument names accepted by the model that contain the word “label”, except if the model used is one of the XxxForQuestionAnswering in which case it will also include the ["start_positions", "end_positions"] keys.
load_best_model_at_end (bool, optional, defaults to False) — Whether or not to load the best model found during training at the end of training. When this option is enabled, the best checkpoint will always be saved. See save_total_limit for more.
When set to True, the parameter save_strategy needs to be the same as evaluation_strategy, and in the case it is “steps”, save_steps must be a round multiple of eval_steps.
metric_for_best_model (str, optional) — Use in conjunction with load_best_model_at_end to specify the metric to use to compare two different models. Must be the name of a metric returned by the evaluation with or without the prefix "eval_". Will default to "loss" if unspecified and load_best_model_at_end=True (to use the evaluation loss).
If you set this value, greater_is_better will default to True. Don’t forget to set it to False if your metric is better when lower.
greater_is_better (bool, optional) — Use in conjunction with load_best_model_at_end and metric_for_best_model to specify if better models should have a greater metric or not. Will default to:
True if metric_for_best_model is set to a value that isn’t "loss" or "eval_loss".
False if metric_for_best_model is not set, or set to "loss" or "eval_loss".
ignore_data_skip (bool, optional, defaults to False) — When resuming training, whether or not to skip the epochs and batches to get the data loading at the same stage as in the previous training. If set to True, the training will begin faster (as that skipping step can take a long time) but will not yield the same results as the interrupted training would have.
sharded_ddp (bool, str or list of ShardedDDPOption, optional, defaults to '') — Use Sharded DDP training from FairScale (in distributed training only). This is an experimental feature.
A list of options along the following:
"simple": to use first instance of sharded DDP released by fairscale (ShardedDDP) similar to ZeRO-2.
"zero_dp_2": to use the second instance of sharded DPP released by fairscale (FullyShardedDDP) in Zero-2 mode (with reshard_after_forward=False).
"zero_dp_3": to use the second instance of sharded DPP released by fairscale (FullyShardedDDP) in Zero-3 mode (with reshard_after_forward=True).
"offload": to add ZeRO-offload (only compatible with "zero_dp_2" and "zero_dp_3").
If a string is passed, it will be split on space. If a bool is passed, it will be converted to an empty list for False and ["simple"] for True.
fsdp (bool, str or list of FSDPOption, optional, defaults to '') — Use PyTorch Fully Sharded Data Parallel (FSDP) training (in distributed training only).
A list of options along the following:
"full_shard": Shard parameters, gradients and optimizer states.
"shard_grad_op": Shard optimizer states and gradients.
"offload": Offload parameters and gradients to CPUs (only compatible with "full_shard" and "shard_grad_op").
"auto_wrap": Automatically recursively wrap layers with FSDP using default_auto_wrap_policy.
fsdp_config (str or dict, optional) — Config to be used with fsdp (PyTorch Fully Sharded Data Parallel training). The value is either a location of an FSDP json config file (e.g., fsdp_config.json) or an already loaded json file as a dict.
A list of config options:
min_num_params (int, optional, defaults to 0): FSDP’s minimum number of parameters for Default Auto Wrapping. (useful only when fsdp field is passed).
transformer_layer_cls_to_wrap (List[str], optional): List of transformer layer class names (case-sensitive) to wrap, e.g., BertLayer, GPTJBlock, T5Block … (useful only when the fsdp flag is passed).
backward_prefetch (str, optional) FSDP’s backward prefetch mode. Controls when to prefetch next set of parameters (useful only when fsdp field is passed).
A list of options along the following:
"backward_pre" : Prefetches the next set of parameters before the current set of parameter’s gradient computation.
"backward_post" : This prefetches the next set of parameters after the current set of parameter’s gradient computation.
forward_prefetch (bool, optional, defaults to False) FSDP’s forward prefetch mode (useful only when fsdp field is passed). If "True", then FSDP explicitly prefetches the next upcoming all-gather while executing in the forward pass.
limit_all_gathers (bool, optional, defaults to False) FSDP’s limit_all_gathers (useful only when fsdp field is passed). If "True", FSDP explicitly synchronizes the CPU thread to prevent too many in-flight all-gathers.
use_orig_params (bool, optional, defaults to False) If "True", allows non-uniform requires_grad during init, which means support for interspersed frozen and trainable parameters. Useful in cases such as parameter-efficient fine-tuning. Please refer to this blog post: https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019
sync_module_states (bool, optional, defaults to True) If "True", each individually wrapped FSDP unit will broadcast module parameters from rank 0 to ensure they are the same across all ranks after initialization.
xla (bool, optional, defaults to False): Whether to use PyTorch/XLA Fully Sharded Data Parallel Training. This is an experimental feature and its API may evolve in the future.
xla_fsdp_settings (dict, optional) The value is a dictionary which stores the XLA FSDP wrapping parameters.
For a complete list of options, please see here.
xla_fsdp_grad_ckpt (bool, optional, defaults to False): Will use gradient checkpointing over each nested XLA FSDP wrapped layer. This setting can only be used when the xla flag is set to true, and an auto wrapping policy is specified through fsdp_min_num_params or fsdp_transformer_layer_cls_to_wrap.
activation_checkpointing (bool, optional, defaults to False): If True, activation checkpointing is used. This technique reduces memory usage by clearing activations of certain layers and recomputing them during a backward pass. Effectively, this trades extra computation time for reduced memory usage.
deepspeed (str or dict, optional) — Use DeepSpeed. This is an experimental feature and its API may evolve in the future. The value is either the location of a DeepSpeed json config file (e.g., ds_config.json) or an already loaded json file as a dict.
label_smoothing_factor (float, optional, defaults to 0.0) — The label smoothing factor to use. Zero means no label smoothing, otherwise the underlying onehot-encoded labels are changed from 0s and 1s to label_smoothing_factor/num_labels and 1 - label_smoothing_factor + label_smoothing_factor/num_labels respectively.
debug (str or list of DebugOption, optional, defaults to "") — Enable one or more debug features. This is an experimental feature.
Possible options are:
"underflow_overflow": detects overflow in model’s input/outputs and reports the last frames that led to the event
"tpu_metrics_debug": print debug metrics on TPU
The options should be separated by whitespaces.
optim (str or training_args.OptimizerNames, optional, defaults to "adamw_torch") — The optimizer to use: adamw_hf, adamw_torch, adamw_torch_fused, adamw_apex_fused, adamw_anyprecision or adafactor.
optim_args (str, optional) — Optional arguments that are supplied to AnyPrecisionAdamW.
group_by_length (bool, optional, defaults to False) — Whether or not to group together samples of roughly the same length in the training dataset (to minimize padding applied and be more efficient). Only useful if applying dynamic padding.
length_column_name (str, optional, defaults to "length") — Column name for precomputed lengths. If the column exists, grouping by length will use these values rather than computing them on train startup. Ignored unless group_by_length is True and the dataset is an instance of Dataset.
report_to (str or List[str], optional, defaults to "all") — The list of integrations to report the results and logs to. Supported platforms are "azure_ml", "clearml", "codecarbon", "comet_ml", "dagshub", "flyte", "mlflow", "neptune", "tensorboard", and "wandb". Use "all" to report to all integrations installed, "none" for no integrations.
ddp_find_unused_parameters (bool, optional) — When using distributed training, the value of the flag find_unused_parameters passed to DistributedDataParallel. Will default to False if gradient checkpointing is used, True otherwise.
ddp_bucket_cap_mb (int, optional) — When using distributed training, the value of the flag bucket_cap_mb passed to DistributedDataParallel.
ddp_broadcast_buffers (bool, optional) — When using distributed training, the value of the flag broadcast_buffers passed to DistributedDataParallel. Will default to False if gradient checkpointing is used, True otherwise.
dataloader_pin_memory (bool, optional, defaults to True) — Whether you want to pin memory in data loaders or not. Will default to True.
skip_memory_metrics (bool, optional, defaults to True) — Whether to skip adding of memory profiler reports to metrics. This is skipped by default because it slows down the training and evaluation speed.
push_to_hub (bool, optional, defaults to False) — Whether or not to push the model to the Hub every time the model is saved. If this is activated, output_dir will begin a git directory synced with the repo (determined by hub_model_id) and the content will be pushed each time a save is triggered (depending on your save_strategy). Calling save_model() will also trigger a push.
If output_dir exists, it needs to be a local clone of the repository to which the Trainer will be pushed.
resume_from_checkpoint (str, optional) — The path to a folder with a valid checkpoint for your model. This argument is not directly used by Trainer, it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details.
hub_model_id (str, optional) — The name of the repository to keep in sync with the local output_dir. It can be a simple model ID in which case the model will be pushed in your namespace. Otherwise it should be the whole repository name, for instance "user_name/model", which allows you to push to an organization you are a member of with "organization_name/model". Will default to user_name/output_dir_name with output_dir_name being the name of output_dir.
hub_strategy (str or HubStrategy, optional, defaults to "every_save") — Defines the scope of what is pushed to the Hub and when. Possible values are:
"end": push the model, its configuration, the tokenizer (if passed along to the Trainer) and a draft of a model card when the save_model() method is called.
"every_save": push the model, its configuration, the tokenizer (if passed along to the Trainer) and a draft of a model card each time there is a model save. The pushes are asynchronous to not block training, and in case the save are very frequent, a new push is only attempted if the previous one is finished. A last push is made with the final model at the end of training.
"checkpoint": like "every_save" but the latest checkpoint is also pushed in a subfolder named last-checkpoint, allowing you to resume training easily with trainer.train(resume_from_checkpoint="last-checkpoint").
"all_checkpoints": like "checkpoint" but all checkpoints are pushed like they appear in the output folder (so you will get one checkpoint folder per folder in your final repository)
hub_token (str, optional) — The token to use to push the model to the Hub. Will default to the token in the cache folder obtained with huggingface-cli login.
hub_private_repo (bool, optional, defaults to False) — If True, the Hub repo will be set to private.
hub_always_push (bool, optional, defaults to False) — Unless this is True, the Trainer will skip pushing a checkpoint when the previous push is not finished.
gradient_checkpointing (bool, optional, defaults to False) — If True, use gradient checkpointing to save memory at the expense of slower backward pass.
include_inputs_for_metrics (bool, optional, defaults to False) — Whether or not the inputs will be passed to the compute_metrics function. This is intended for metrics that need inputs, predictions and references for scoring calculation in Metric class.
auto_find_batch_size (bool, optional, defaults to False) — Whether to find a batch size that will fit into memory automatically through exponential decay, avoiding CUDA Out-of-Memory errors. Requires accelerate to be installed (pip install accelerate)
full_determinism (bool, optional, defaults to False) — If True, enable_full_determinism() is called instead of set_seed() to ensure reproducible results in distributed training. Important: this will negatively impact the performance, so only use it for debugging.
torchdynamo (str, optional) — If set, the backend compiler for TorchDynamo. Possible choices are "eager", "aot_eager", "inductor", "nvfuser", "aot_nvfuser", "aot_cudagraphs", "ofi", "fx2trt", "onnxrt" and "ipex".
ray_scope (str, optional, defaults to "last") — The scope to use when doing hyperparameter search with Ray. By default, "last" will be used. Ray will then use the last checkpoint of all trials, compare those, and select the best one. However, other options are also available. See the Ray documentation for more options.
ddp_timeout (int, optional, defaults to 1800) — The timeout for torch.distributed.init_process_group calls, used to avoid GPU socket timeouts when performing slow operations in distributed runs. Please refer to the PyTorch documentation (https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) for more information.
use_mps_device (bool, optional, defaults to False) — This argument is deprecated. The mps device will be used if it is available, similar to the cuda device.
torch_compile (bool, optional, defaults to False) — Whether or not to compile the model using PyTorch 2.0 torch.compile.
This will use the best defaults for the torch.compile API. You can customize the defaults with the arguments torch_compile_backend and torch_compile_mode, but we don’t guarantee any of them will work as the support is progressively rolled out in PyTorch.
This flag and the whole compile API is experimental and subject to change in future releases.
torch_compile_backend (str, optional) — The backend to use in torch.compile. If set to any value, torch_compile will be set to True.
Refer to the PyTorch doc for possible values and note that they may change across PyTorch versions.
This flag is experimental and subject to change in future releases.
torch_compile_mode (str, optional) — The mode to use in torch.compile. If set to any value, torch_compile will be set to True.
Refer to the PyTorch doc for possible values and note that they may change across PyTorch versions.
This flag is experimental and subject to change in future releases.
include_tokens_per_second (bool, optional) — Whether or not to compute the number of tokens per second per device for training speed metrics.
This will iterate over the entire training dataloader once beforehand, and will slow down the entire process.
sortish_sampler (bool, optional, defaults to False) — Whether to use a sortish sampler or not. Only possible if the underlying datasets are Seq2SeqDataset for now but will become generally available in the near future.
It sorts the inputs according to lengths in order to minimize the padding size, with a bit of randomness for the training set.
predict_with_generate (bool, optional, defaults to False) — Whether to use generate to calculate generative metrics (ROUGE, BLEU).
generation_max_length (int, optional) — The max_length to use on each evaluation loop when predict_with_generate=True. Will default to the max_length value of the model configuration.
generation_num_beams (int, optional) — The num_beams to use on each evaluation loop when predict_with_generate=True. Will default to the num_beams value of the model configuration.
generation_config (str or Path or GenerationConfig, optional) — Allows loading a GenerationConfig from the from_pretrained method. This can be either:
a string, the model id of a pretrained model configuration hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
a path to a directory containing a configuration file saved using the save_pretrained() method, e.g., ./my_model_directory/.
a GenerationConfig object.
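For instance, a minimal sketch of the generation-related arguments above using Seq2SeqTrainingArguments, the seq2seq variant of the training arguments (the output directory and values below are placeholders, not recommendations):

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="my-model",       # placeholder directory
    predict_with_generate=True,  # compute generative metrics such as ROUGE/BLEU
    generation_max_length=128,
    generation_num_beams=4,
)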
TrainingArguments is the subset of the arguments we use in our example scripts which relate to the training loop itself.
Using HfArgumentParser we can turn this class into argparse arguments that can be specified on the command line.
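As an illustration, here is a minimal sketch of both approaches; the output directory name and hyperparameter values are placeholders, not recommendations:

from transformers import HfArgumentParser, TrainingArguments

# Option 1: build the arguments programmatically
training_args = TrainingArguments(
    output_dir="my-model",          # placeholder directory
    num_train_epochs=3,
    learning_rate=5e-5,
    weight_decay=0.01,
    warmup_ratio=0.1,
    logging_strategy="steps",
    logging_steps=100,
    evaluation_strategy="steps",
    eval_steps=500,
    save_strategy="steps",          # must match evaluation_strategy when
    save_steps=500,                 # load_best_model_at_end is enabled
    save_total_limit=2,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    report_to="none",
)

# Option 2: parse the same fields from the command line, e.g.
#   python train.py --output_dir my-model --learning_rate 5e-5 --num_train_epochs 3
# parser = HfArgumentParser(TrainingArguments)
# (training_args,) = parser.parse_args_into_dataclasses()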
Serializes this instance while replacing Enum members by their values and GenerationConfig objects by dictionaries (for JSON serialization support). It obfuscates the token values by removing them.
Checkpoints
By default, Trainer will save all checkpoints in the output_dir you set in the TrainingArguments you are using. Those will go in a subfolder named checkpoint-xxx, with xxx being the step the training was at.
Resuming training from a checkpoint can be done when calling Trainer.train() with either:
resume_from_checkpoint=True which will resume training from the latest checkpoint
resume_from_checkpoint=checkpoint_dir which will resume training from the specific checkpoint in the directory passed.
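For example, a minimal sketch (assuming trainer is an already constructed Trainer; the checkpoint path is a placeholder):

# Resume from the latest checkpoint found in output_dir
trainer.train(resume_from_checkpoint=True)

# ... or resume from a specific checkpoint directory
trainer.train(resume_from_checkpoint="output_dir/checkpoint-1500")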
In addition, you can easily save your checkpoints on the Model Hub when using push_to_hub=True. By default, all the models saved in intermediate checkpoints are saved in different commits, but not the optimizer state. You can adapt the hub-strategy value of your TrainingArguments to either:
"checkpoint": the latest checkpoint is also pushed in a subfolder named last-checkpoint, allowing you to resume training easily with trainer.train(resume_from_checkpoint="output_dir/last-checkpoint").
"all_checkpoints": all checkpoints are pushed like they appear in the output folder (so you will get one checkpoint folder per folder in your final repository)
Logging
By default Trainer will use logging.INFO for the main process and logging.WARNING for the replicas if any.
These defaults can be overridden to use any of the 5 logging levels with TrainingArguments’s arguments:
log_level - for the main process
log_level_replica - for the replicas
Further, if TrainingArguments’s log_on_each_node is set to False only the main node will use the log level settings for its main process, all other nodes will use the log level settings for replicas.
Note that Trainer is going to set transformers’s log level separately for each node in its Trainer.__init__(). So you may want to set this sooner (see the next example) if you tap into other transformers functionality before creating the Trainer object.
Here is an example of how this can be used in an application:
import logging
import sys

import datasets
import transformers
from transformers import Trainer

[...]
logger = logging.getLogger(__name__)
# Set up logging to stdout
logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    datefmt="%m/%d/%Y %H:%M:%S",
    handlers=[logging.StreamHandler(sys.stdout)],
)
# Use the log level resolved for this process for both this script and the libraries it uses
log_level = training_args.get_process_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
trainer = Trainer(...)
Then, if you only want to see warnings on the main node and want all other nodes to not print any (most likely duplicated) warnings, you could run it as:
my_app.py ... --log_level warning --log_level_replica error
In the multi-node environment if you also don’t want the logs to repeat for each node’s main process, you will want to change the above to:
my_app.py ... --log_level warning --log_level_replica error --log_on_each_node 0
and then only the main process of the first node will log at the “warning” level, and all other processes on the main node and all processes on other nodes will log at the “error” level.
If you need your application to be as quiet as possible you could do:
my_app.py ... --log_level error --log_level_replica error --log_on_each_node 0
(add --log_on_each_node 0 if in a multi-node environment)
Randomness
When resuming from a checkpoint generated by Trainer all efforts are made to restore the python, numpy and pytorch RNG states to the same states as they were at the moment of saving that checkpoint, which should make the “stop and resume” style of training as close as possible to non-stop training.
However, due to various non-deterministic PyTorch defaults, this might not fully work. If you want full determinism, please refer to Controlling sources of randomness. As explained in that document, some of the settings that make things deterministic (e.g., torch.backends.cudnn.deterministic) may slow things down, therefore this can’t be done by default; you can enable those yourself if needed.
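If you do want to trade speed for full reproducibility, the simplest route is the full_determinism training argument described above. A minimal sketch (the output directory is a placeholder):

from transformers import TrainingArguments

# full_determinism=True makes the Trainer call enable_full_determinism() instead of
# set_seed(); expect slower training, so use it mainly for debugging.
training_args = TrainingArguments(output_dir="my-model", full_determinism=True, seed=42)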
Specific GPUs Selection
Let’s discuss how you can tell your program which GPUs are to be used and in what order.
When using DistributedDataParallel to use only a subset of your GPUs, you simply specify the number of GPUs to use. For example, if you have 4 GPUs, but you wish to use the first 2 you can do:
python -m torch.distributed.launch --nproc_per_node=2 trainer-program.py ...
If you have either accelerate or deepspeed installed, you can also accomplish the same by using one of:
accelerate launch --num_processes 2 trainer-program.py ...
deepspeed --num_gpus 2 trainer-program.py ...
You don’t need to use the Accelerate or the Deepspeed integration features to use these launchers.
Until now you were able to tell the program how many GPUs to use. Now let’s discuss how to select specific GPUs and control their order.
The following environment variables help you control which GPUs to use and their order.
CUDA_VISIBLE_DEVICES
If you have multiple GPUs and you’d like to use only 1 or a few of those GPUs, set the environment variable CUDA_VISIBLE_DEVICES to a list of the GPUs to be used.
For example, let’s say you have 4 GPUs: 0, 1, 2 and 3. To run only on the physical GPUs 0 and 2, you can do:
CUDA_VISIBLE_DEVICES=0,2 python -m torch.distributed.launch trainer-program.py ...
So now pytorch will see only 2 GPUs, where your physical GPUs 0 and 2 are mapped to cuda:0 and cuda:1 respectively.
You can even change their order:
CUDA_VISIBLE_DEVICES=2,0 python -m torch.distributed.launch trainer-program.py ...
Here your physical GPUs 0 and 2 are mapped to cuda:1 and cuda:0 respectively.
The above examples were all for DistributedDataParallel use pattern, but the same method works for DataParallel as well:
CUDA_VISIBLE_DEVICES=2,0 python trainer-program.py ...
To emulate an environment without GPUs simply set this environment variable to an empty value like so:
CUDA_VISIBLE_DEVICES= python trainer-program.py ...
As with any environment variable you can, of course, export those instead of adding these to the command line, as in:
export CUDA_VISIBLE_DEVICES=0,2
python -m torch.distributed.launch trainer-program.py ...
but this approach can be confusing since you may forget you set up the environment variable earlier and not understand why the wrong GPUs are used. Therefore, it’s a common practice to set the environment variable just for a specific run on the same command line as it’s shown in most examples of this section.
CUDA_DEVICE_ORDER
There is an additional environment variable CUDA_DEVICE_ORDER that controls how the physical devices are ordered. The two choices are:
ordered by PCIe bus IDs (matches nvidia-smi’s order) - this is the default.
export CUDA_DEVICE_ORDER=PCI_BUS_ID
ordered by GPU compute capabilities
export CUDA_DEVICE_ORDER=FASTEST_FIRST
Most of the time you don’t need to care about this environment variable, but it’s very helpful if you have a lopsided setup where an old and a new GPU are physically inserted in such a way that the slow older card appears to be first. One way to fix that is to swap the cards. But if you can’t swap the cards (e.g., if the cooling of the devices gets impacted) then setting CUDA_DEVICE_ORDER=FASTEST_FIRST will always put the newer, faster card first. It’ll be somewhat confusing though since nvidia-smi will still report them in the PCIe order.
The other solution to swapping the order is to use:
export CUDA_VISIBLE_DEVICES=1,0
In this example we are working with just 2 GPUs, but of course the same would apply to as many GPUs as your computer has.
Also, if you do set this environment variable, it’s best to set it in your ~/.bashrc file or some other startup config file and forget about it.
Trainer Integrations
The Trainer has been extended to support libraries that may dramatically improve your training time and fit much bigger models.
Currently it supports third party solutions, DeepSpeed and PyTorch FSDP, which implement parts of the paper ZeRO: Memory Optimizations Toward Training Trillion Parameter Models, by Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He.
This provided support is new and experimental as of this writing. While the support for DeepSpeed and PyTorch FSDP is active and we welcome issues around it, we don’t support the FairScale integration anymore since it has been integrated into PyTorch main (see the PyTorch FSDP integration).
CUDA Extension Installation Notes
As of this writing, DeepSpeed requires compilation of CUDA C++ code before it can be used.
While all installation issues should be dealt with through the corresponding GitHub Issues of Deepspeed, there are a few common issues that one may encounter while building any PyTorch extension that needs to build CUDA extensions.
Therefore, if you encounter a CUDA-related build issue while installing DeepSpeed (pip install deepspeed), please read the following notes first.
In these notes we give examples for what to do when pytorch has been built with CUDA 10.2. If your situation is different remember to adjust the version number to the one you are after.
Possible problem #1
While PyTorch comes with its own CUDA toolkit, to build these extensions you must have an identical version of CUDA installed system-wide.
For example, if you installed pytorch with cudatoolkit==10.2 in the Python environment, you also need to have CUDA 10.2 installed system-wide.
The exact location may vary from system to system, but /usr/local/cuda-10.2 is the most common location on many Unix systems. When CUDA is correctly set up and added to the PATH environment variable, one can find the installation location by running which nvcc.
If you don’t have CUDA installed system-wide, install it first. You will find the instructions by using your favorite search engine. For example, if you’re on Ubuntu you may want to search for: ubuntu cuda 10.2 install.
Possible problem #2
Another possible common problem is that you may have more than one CUDA toolkit installed system-wide. For example you may have:
/usr/local/cuda-10.2
/usr/local/cuda-11.0
Now, in this situation you need to make sure that your PATH and LD_LIBRARY_PATH environment variables contain the correct paths to the desired CUDA version. Typically, package installers will set these to contain whatever the last version was installed. If you encounter the problem, where the package build fails because it can’t find the right CUDA version despite you having it installed system-wide, it means that you need to adjust the 2 aforementioned environment variables.
First, you may look at their contents:
echo $PATH
echo $LD_LIBRARY_PATH
so you get an idea of what is inside.
It’s possible that LD_LIBRARY_PATH is empty.
PATH lists the locations where executables can be found and LD_LIBRARY_PATH is where shared libraries are to be looked for. In both cases, earlier entries have priority over the later ones. : is used to separate multiple entries.
Now, to tell the build program where to find the specific CUDA toolkit, insert the desired paths to be listed first by doing:
export PATH=/usr/local/cuda-10.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH
Note that we aren’t overwriting the existing values, but prepending instead.
Of course, adjust the version number and the full path if need be. Check that the directories you assign actually do exist. The lib64 sub-directory is where the various CUDA .so objects, like libcudart.so, reside. It’s unlikely that your system will have it named differently, but if it does, adjust it to reflect your reality.
Possible problem #3
Some older CUDA versions may refuse to build with newer compilers. For example, you may have gcc-9 but CUDA wants gcc-7.
There are various ways to go about it.
If you can install the latest CUDA toolkit it typically should support the newer compiler.
Alternatively, you could install the lower version of the compiler in addition to the one you already have, or you may already have it but it’s not the default one, so the build system can’t see it. If you have gcc-7 installed but the build system complains it can’t find it, the following might do the trick:
sudo ln -s /usr/bin/gcc-7 /usr/local/cuda-10.2/bin/gcc
sudo ln -s /usr/bin/g++-7 /usr/local/cuda-10.2/bin/g++
Here, we are making a symlink to gcc-7 from /usr/local/cuda-10.2/bin/gcc and since /usr/local/cuda-10.2/bin/ should be in the PATH environment variable (see the previous problem’s solution), it should find gcc-7 (and g++-7) and then the build will succeed.
As always make sure to edit the paths in the example to match your situation.
PyTorch Fully Sharded Data parallel
To accelerate training huge models on larger batch sizes, we can use a fully sharded data parallel model. This type of data parallel paradigm enables fitting more data and larger models by sharding the optimizer states, gradients and parameters. To read more about it and the benefits, check out the Fully Sharded Data Parallel blog. We have integrated the latest PyTorch’s Fully Sharded Data Parallel (FSDP) training feature. All you need to do is enable it through the config.
Required PyTorch version for FSDP support: PyTorch 1.12.0 or newer, as model saving with FSDP activated is only available with recent fixes.
Usage:
Make sure you have added the distributed launcher -m torch.distributed.launch --nproc_per_node=NUMBER_OF_GPUS_YOU_HAVE if you haven’t been using it already.
Sharding Strategy:
FULL_SHARD : Shards optimizer states + gradients + model parameters across data parallel workers/GPUs. For this, add --fsdp full_shard to the command line arguments.
SHARD_GRAD_OP : Shards optimizer states + gradients across data parallel workers/GPUs. For this, add --fsdp shard_grad_op to the command line arguments.
NO_SHARD : No sharding. For this, add --fsdp no_shard to the command line arguments.
To offload the parameters and gradients to the CPU, add --fsdp "full_shard offload" or --fsdp "shard_grad_op offload" to the command line arguments.
To automatically recursively wrap layers with FSDP using default_auto_wrap_policy, add --fsdp "full_shard auto_wrap" or --fsdp "shard_grad_op auto_wrap" to the command line arguments.
To enable both CPU offloading and auto wrapping, add --fsdp "full_shard offload auto_wrap" or --fsdp "shard_grad_op offload auto_wrap" to the command line arguments.
Remaining FSDP config is passed via --fsdp_config <path_to_fsdp_config.json>. It is either a location of FSDP json config file (e.g., fsdp_config.json) or an already loaded json file as dict.
If auto wrapping is enabled, you can either use transformer based auto wrap policy or size based auto wrap policy.
For transformer based auto wrap policy, it is recommended to specify fsdp_transformer_layer_cls_to_wrap in the config file. If not specified, the default value is model._no_split_modules when available. This specifies the list of transformer layer class names (case-sensitive) to wrap, e.g., BertLayer, GPTJBlock, T5Block … This is important because submodules that share weights (e.g., the embedding layer) should not end up in different FSDP wrapped units. Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers. Remaining layers, including the shared embeddings, are conveniently wrapped in the same outermost FSDP unit. Therefore, use this for transformer based models.
For size based auto wrap policy, please add fsdp_min_num_params in the config file. It specifies FSDP’s minimum number of parameters for auto wrapping.
fsdp_backward_prefetch can be specified in the config file. It controls when to prefetch the next set of parameters. backward_pre and backward_post are the available options. For more information refer to torch.distributed.fsdp.fully_sharded_data_parallel.BackwardPrefetch.
fsdp_forward_prefetch can be specified in the config file. It controls when to prefetch next set of parameters. If "True", FSDP explicitly prefetches the next upcoming all-gather while executing in the forward pass.
limit_all_gathers can be specified in the config file. If "True", FSDP explicitly synchronizes the CPU thread to prevent too many in-flight all-gathers.
activation_checkpointing can be specified in the config file. If "True", FSDP activation checkpointing is used. This technique reduces memory usage by clearing activations of certain layers and recomputing them during a backward pass. Effectively, this trades extra computation time for reduced memory usage.
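The same options can also be set programmatically instead of through a json file. A minimal sketch, assuming a transformer based auto wrap policy and a model built from BertLayer blocks (the output directory and the layer class name are placeholders); the script is still meant to be launched with a distributed launcher as shown above, and the key names follow the --fsdp_config options described in this section:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my-model",
    fsdp="full_shard auto_wrap",  # same string as the --fsdp command line value
    fsdp_config={                 # same keys as in --fsdp_config <path_to_fsdp_config.json>
        "fsdp_transformer_layer_cls_to_wrap": ["BertLayer"],
        "fsdp_backward_prefetch": "backward_pre",
        "limit_all_gathers": True,
    },
)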
A few caveats to be aware of:
FSDP is incompatible with generate, and thus is incompatible with --predict_with_generate in all seq2seq/clm scripts (translation/summarization/clm etc.).
Please refer to issue #21667.
PyTorch/XLA Fully Sharded Data parallel
For all the TPU users, great news! PyTorch/XLA now supports FSDP, and all the latest Fully Sharded Data Parallel (FSDP) training features are supported. For more information refer to Scaling PyTorch models on Cloud TPUs with FSDP and the PyTorch/XLA implementation of FSDP. All you need to do is enable it through the config.
Required PyTorch/XLA version for FSDP support: >=2.0
Usage:
Pass --fsdp "full_shard" along with the following changes to be made in --fsdp_config <path_to_fsdp_config.json>:
xla should be set to True to enable PyTorch/XLA FSDP.
xla_fsdp_settings The value is a dictionary which stores the XLA FSDP wrapping parameters. For a complete list of options, please see here.
xla_fsdp_grad_ckpt. When True, uses gradient checkpointing over each nested XLA FSDP wrapped layer. This setting can only be used when the xla flag is set to true, and an auto wrapping policy is specified through fsdp_min_num_params or fsdp_transformer_layer_cls_to_wrap.
You can either use transformer based auto wrap policy or size based auto wrap policy.
For transformer based auto wrap policy, it is recommended to specify fsdp_transformer_layer_cls_to_wrap in the config file. If not specified, the default value is model._no_split_modules when available. This specifies the list of transformer layer class names (case-sensitive) to wrap, e.g., BertLayer, GPTJBlock, T5Block … This is important because submodules that share weights (e.g., the embedding layer) should not end up in different FSDP wrapped units. Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers. Remaining layers, including the shared embeddings, are conveniently wrapped in the same outermost FSDP unit. Therefore, use this for transformer based models.
For size based auto wrap policy, please add fsdp_min_num_params in the config file. It specifies FSDP’s minimum number of parameters for auto wrapping.
Using Trainer for accelerated PyTorch Training on Mac
With the PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training. This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac. Apple’s Metal Performance Shaders (MPS) as a backend for PyTorch enables this and can be used via the new "mps" device. This will map computational graphs and primitives on the MPS Graph framework and tuned kernels provided by MPS. For more information please refer to the official documents Introducing Accelerated PyTorch Training on Mac and MPS BACKEND.
We strongly recommend installing PyTorch >= 1.13 (nightly version at the time of writing) on your macOS machine. It has major fixes related to model correctness and performance improvements for transformer based models. Please refer to https://github.com/pytorch/pytorch/issues/82707 for more details.
Benefits of Training and Inference using Apple Silicon Chips
Enables users to train larger networks or batch sizes locally
Reduces data retrieval latency and provides the GPU with direct access to the full memory store thanks to the unified memory architecture, thereby improving end-to-end performance.
Reduces costs associated with cloud-based development or the need for additional local GPUs.
Pre-requisites: To install torch with mps support, please follow this nice medium article GPU-Acceleration Comes to PyTorch on M1 Macs.
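To verify that the mps device is actually visible to PyTorch before launching a run, a quick check (sketch):

import torch

# True if the current machine exposes an MPS device (Apple silicon with a recent macOS)
print(torch.backends.mps.is_available())
# True if the installed PyTorch build was compiled with MPS support
print(torch.backends.mps.is_built())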
Usage: The mps device will be used by default if available, similar to the way the cuda device is used, so no action from the user is required. For example, you can run the official GLUE text classification task (from the root folder) on an Apple Silicon GPU with the command below:
export TASK_NAME=mrpc
python examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--overwrite_output_dir
A few caveats to be aware of
Some PyTorch operations have not been implemented in mps and will throw an error. One way to get around that is to set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1, which will fall back to CPU for these operations. It still throws a UserWarning, however.
The distributed setups gloo and nccl do not work with the mps device. This means that currently only a single GPU of the mps device type can be used.
Finally, please remember that 🤗 Trainer only integrates the MPS backend, so if you have any problems or questions with regards to MPS backend usage, please file an issue with PyTorch GitHub.
Using Accelerate Launcher with Trainer
Accelerate now powers Trainer. In terms of what users should expect:
They can keep using the Trainer integrations such as FSDP and DeepSpeed via trainer arguments without any changes on their part.
They can now use Accelerate Launcher with Trainer (recommended).
Steps to use Accelerate Launcher with Trainer:
Make sure 🤗 Accelerate is installed; you can’t use the Trainer without it anyway. If it isn’t, run pip install accelerate. You may also need to update your version of Accelerate: pip install accelerate --upgrade
Run accelerate config and fill the questionnaire. Below are example accelerate configs: a. DDP Multi-node Multi-GPU config:
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 0
main_process_ip: 192.168.20.1
main_process_port: 9898
main_training_function: main
mixed_precision: fp16
num_machines: 2
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
b. FSDP config:
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_backward_prefetch_policy: BACKWARD_PRE
  fsdp_forward_prefetch: true
  fsdp_offload_params: false
  fsdp_sharding_strategy: 1
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_sync_module_states: true
  fsdp_transformer_layer_cls_to_wrap: BertLayer
  fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
c. DeepSpeed config pointing to a file:
compute_environment: LOCAL_MACHINE
deepspeed_config:
  deepspeed_config_file: /home/user/configs/ds_zero3_config.json
  zero3_init_flag: true
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
d. DeepSpeed config using accelerate plugin:
compute_environment: LOCAL_MACHINE
deepspeed_config:
  gradient_accumulation_steps: 1
  gradient_clipping: 0.7
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: true
  zero_stage: 2
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
Run the Trainer script with args other than the ones handled above by accelerate config or launcher args. Below is an example of running run_glue.py using the accelerate launcher with the FSDP config from above.
cd transformers
accelerate launch \
./examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 16 \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--overwrite_output_dir
You can also directly use the command-line args for accelerate launch. The above example would map to:
cd transformers
accelerate launch --num_processes=2 \
--use_fsdp \
--mixed_precision=bf16 \
--fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP \
--fsdp_transformer_layer_cls_to_wrap="BertLayer" \
--fsdp_sharding_strategy=1 \
--fsdp_state_dict_type=FULL_STATE_DICT \
./examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 16 \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--overwrite_output_dir
For more information, please refer to the 🤗 Accelerate CLI guide: Launching your 🤗 Accelerate scripts.
Sections that were moved:
[ DeepSpeed | Installation | Deployment with multiple GPUs | Deployment with one GPU | Deployment in Notebooks | Configuration | Passing Configuration | Shared Configuration | ZeRO | ZeRO-2 Config | ZeRO-3 Config | NVMe Support | ZeRO-2 vs ZeRO-3 Performance | ZeRO-2 Example | ZeRO-3 Example | Optimizer | Scheduler | fp32 Precision | Automatic Mixed Precision | Batch Size | Gradient Accumulation | Gradient Clipping | Getting The Model Weights Out ] |
https://huggingface.co/docs/transformers/main_classes/pipelines | Pipelines
The pipelines are a great and easy way to use models for inference. These pipelines are objects that abstract most of the complex code from the library, offering a simple API dedicated to several tasks, including Named Entity Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction and Question Answering. See the task summary for examples of use.
There are two categories of pipeline abstractions to be aware of:
The pipeline() which is the most powerful object encapsulating all other pipelines.
Task-specific pipelines are available for audio, computer vision, natural language processing, and multimodal tasks.
The pipeline abstraction
The pipeline abstraction is a wrapper around all the other available pipelines. It is instantiated as any other pipeline but can provide additional quality of life.
Simple call on one item:
>>> pipe = pipeline("text-classification")
>>> pipe("This restaurant is awesome")
[{'label': 'POSITIVE', 'score': 0.9998743534088135}]
If you want to use a specific model from the hub you can ignore the task if the model on the hub already defines it:
>>> pipe = pipeline(model="roberta-large-mnli")
>>> pipe("This restaurant is awesome")
[{'label': 'NEUTRAL', 'score': 0.7313136458396912}]
To call a pipeline on many items, you can call it with a list.
>>> pipe = pipeline("text-classification")
>>> pipe(["This restaurant is awesome", "This restaurant is awful"])
[{'label': 'POSITIVE', 'score': 0.9998743534088135},
{'label': 'NEGATIVE', 'score': 0.9996669292449951}]
To iterate over full datasets it is recommended to use a dataset directly. This means you don’t need to allocate the whole dataset at once, nor do you need to do batching yourself. This should work just as fast as custom loops on GPU. If it doesn’t, don’t hesitate to create an issue.
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from tqdm.auto import tqdm
pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
print(out)
For ease of use, a generator is also possible:
from transformers import pipeline
pipe = pipeline("text-classification")
def data():
while True:
yield "This is a test"
for out in pipe(data()):
print(out)
transformers.pipeline
< source >
( task: str = None model: typing.Union[str, ForwardRef('PreTrainedModel'), ForwardRef('TFPreTrainedModel'), NoneType] = None config: typing.Union[str, transformers.configuration_utils.PretrainedConfig, NoneType] = None tokenizer: typing.Union[str, transformers.tokenization_utils.PreTrainedTokenizer, ForwardRef('PreTrainedTokenizerFast'), NoneType] = None feature_extractor: typing.Union[str, ForwardRef('SequenceFeatureExtractor'), NoneType] = None image_processor: typing.Union[str, transformers.image_processing_utils.BaseImageProcessor, NoneType] = None framework: typing.Optional[str] = None revision: typing.Optional[str] = None use_fast: bool = True token: typing.Union[bool, str, NoneType] = None device: typing.Union[int, str, ForwardRef('torch.device'), NoneType] = None device_map = None torch_dtype = None trust_remote_code: typing.Optional[bool] = None model_kwargs: typing.Dict[str, typing.Any] = None pipeline_class: typing.Optional[typing.Any] = None **kwargs ) → Pipeline
Parameters
task (str) — The task defining which pipeline will be returned. Currently accepted tasks are:
"audio-classification": will return a AudioClassificationPipeline.
"automatic-speech-recognition": will return a AutomaticSpeechRecognitionPipeline.
"conversational": will return a ConversationalPipeline.
"depth-estimation": will return a DepthEstimationPipeline.
"document-question-answering": will return a DocumentQuestionAnsweringPipeline.
"feature-extraction": will return a FeatureExtractionPipeline.
"fill-mask": will return a FillMaskPipeline:.
"image-classification": will return a ImageClassificationPipeline.
"image-segmentation": will return a ImageSegmentationPipeline.
"image-to-image": will return a ImageToImagePipeline.
"image-to-text": will return a ImageToTextPipeline.
"mask-generation": will return a MaskGenerationPipeline.
"object-detection": will return a ObjectDetectionPipeline.
"question-answering": will return a QuestionAnsweringPipeline.
"summarization": will return a SummarizationPipeline.
"table-question-answering": will return a TableQuestionAnsweringPipeline.
"text2text-generation": will return a Text2TextGenerationPipeline.
"text-classification" (alias "sentiment-analysis" available): will return a TextClassificationPipeline.
"text-generation": will return a TextGenerationPipeline:.
"text-to-audio" (alias "text-to-speech" available): will return a TextToAudioPipeline:.
"token-classification" (alias "ner" available): will return a TokenClassificationPipeline.
"translation": will return a TranslationPipeline.
"translation_xx_to_yy": will return a TranslationPipeline.
"video-classification": will return a VideoClassificationPipeline.
"visual-question-answering": will return a VisualQuestionAnsweringPipeline.
"zero-shot-classification": will return a ZeroShotClassificationPipeline.
"zero-shot-image-classification": will return a ZeroShotImageClassificationPipeline.
"zero-shot-audio-classification": will return a ZeroShotAudioClassificationPipeline.
"zero-shot-object-detection": will return a ZeroShotObjectDetectionPipeline.
model (str or PreTrainedModel or TFPreTrainedModel, optional) — The model that will be used by the pipeline to make predictions. This can be a model identifier or an actual instance of a pretrained model inheriting from PreTrainedModel (for PyTorch) or TFPreTrainedModel (for TensorFlow).
If not provided, the default for the task will be loaded.
config (str or PretrainedConfig, optional) — The configuration that will be used by the pipeline to instantiate the model. This can be a model identifier or an actual pretrained model configuration inheriting from PretrainedConfig.
If not provided, the default configuration file for the requested model will be used. That means that if model is given, its default configuration will be used. However, if model is not supplied, this task’s default model’s config is used instead.
tokenizer (str or PreTrainedTokenizer, optional) — The tokenizer that will be used by the pipeline to encode data for the model. This can be a model identifier or an actual pretrained tokenizer inheriting from PreTrainedTokenizer.
If not provided, the default tokenizer for the given model will be loaded (if it is a string). If model is not specified or not a string, then the default tokenizer for config is loaded (if it is a string). However, if config is also not given or not a string, then the default tokenizer for the given task will be loaded.
feature_extractor (str or PreTrainedFeatureExtractor, optional) — The feature extractor that will be used by the pipeline to encode data for the model. This can be a model identifier or an actual pretrained feature extractor inheriting from PreTrainedFeatureExtractor.
Feature extractors are used for non-NLP models, such as Speech or Vision models as well as multi-modal models. Multi-modal models will also require a tokenizer to be passed.
If not provided, the default feature extractor for the given model will be loaded (if it is a string). If model is not specified or not a string, then the default feature extractor for config is loaded (if it is a string). However, if config is also not given or not a string, then the default feature extractor for the given task will be loaded.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
revision (str, optional, defaults to "main") — When passing a task name or a string model identifier: The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
use_fast (bool, optional, defaults to True) — Whether or not to use a Fast tokenizer if possible (a PreTrainedTokenizerFast).
use_auth_token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
device (int or str or torch.device) — Defines the device (e.g., "cpu", "cuda:1", "mps", or a GPU ordinal rank like 1) on which this pipeline will be allocated.
device_map (str or Dict[str, Union[int, str, torch.device]], optional) — Sent directly as model_kwargs (just a simpler shortcut). When the accelerate library is present, set device_map="auto" to compute the most optimized device_map automatically (see here for more information).
Do not use device_map AND device at the same time, as they will conflict.
torch_dtype (str or torch.dtype, optional) — Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, … or "auto"). A short usage sketch combining device_map and torch_dtype is shown after the examples below.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom code defined on the Hub in their own modeling, configuration, tokenization or even pipeline files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
model_kwargs (Dict[str, Any], optional) — Additional dictionary of keyword arguments passed along to the model’s from_pretrained(..., **model_kwargs) function.
kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to the specific pipeline init (see the documentation for the corresponding pipeline class for possible values).
A suitable pipeline for the task.
Utility factory method to build a Pipeline.
Pipelines are made of:
A tokenizer in charge of mapping raw textual input to tokens.
A model to make predictions from the inputs.
Some (optional) post-processing for enhancing the model’s output.
Examples:
>>> from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
>>> # Sentiment analysis pipeline
>>> analyzer = pipeline("sentiment-analysis")
>>> # Question answering pipeline, specifying the checkpoint identifier
>>> oracle = pipeline(
... "question-answering", model="distilbert-base-cased-distilled-squad", tokenizer="bert-base-cased"
... )
>>> # Named entity recognition pipeline, passing in a specific model and tokenizer
>>> model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> recognizer = pipeline("ner", model=model, tokenizer=tokenizer)
Pipeline batching
All pipelines can use batching. This will work whenever the pipeline uses its streaming ability (so when passing lists or Dataset or generator).
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
pipe = pipeline("text-classification", device=0)
for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
    print(out)
However, this is not automatically a win for performance. It can be either a 10x speedup or 5x slowdown depending on hardware, data and the actual model being used.
Example where it’s mostly a speedup:
from transformers import pipeline
from torch.utils.data import Dataset
from tqdm.auto import tqdm
pipe = pipeline("text-classification", device=0)
class MyDataset(Dataset):
    def __len__(self):
        return 5000

    def __getitem__(self, i):
        return "This is a test"


dataset = MyDataset()

for batch_size in [1, 8, 64, 256]:
    print("-" * 30)
    print(f"Streaming batch_size={batch_size}")
    for out in tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)):
        pass
# On GTX 970
------------------------------
Streaming no batching
100%|██████████████████████████████████████████████████████████████████████| 5000/5000 [00:26<00:00, 187.52it/s]
------------------------------
Streaming batch_size=8
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:04<00:00, 1205.95it/s]
------------------------------
Streaming batch_size=64
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:02<00:00, 2478.24it/s]
------------------------------
Streaming batch_size=256
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:01<00:00, 2554.43it/s]
(diminishing returns, saturated the GPU)
Example where it’s mostly a slowdown:
class MyDataset(Dataset):
    def __len__(self):
        return 5000

    def __getitem__(self, i):
        if i % 64 == 0:
            n = 100
        else:
            n = 1
        return "This is a test" * n
This dataset contains an occasional very long sentence compared to the others. In that case, the whole batch needs to be 400 tokens long, so the whole batch will be [64, 400] instead of [64, 4], leading to the large slowdown. Even worse, on bigger batches, the program simply crashes.
Streaming no batching
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:05<00:00, 183.69it/s]
Streaming batch_size=8
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:03<00:00, 265.74it/s]
Streaming batch_size=64
100%|██████████████████████████████████████████████████████████████████████| 1000/1000 [00:26<00:00, 37.80it/s]
Streaming batch_size=256
0%| | 0/1000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/nicolas/src/transformers/test.py", line 42, in <module>
for out in tqdm(pipe(dataset, batch_size=256), total=len(dataset)):
....
q = q / math.sqrt(dim_per_head)
RuntimeError: CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 3.95 GiB total capacity; 1.72 GiB already allocated; 354.88 MiB free; 2.46 GiB reserved in total by PyTorch)
There are no good (general) solutions for this problem, and your mileage may vary depending on your use case. A rule of thumb for users:
Measure performance on your load, with your hardware. Measure, measure, and keep measuring. Real numbers are the only way to go.
If you are latency constrained (live product doing inference), don’t batch.
If you are using CPU, don’t batch.
If you are optimizing for throughput (you want to run your model on a bunch of static data) on GPU, then:
If you have no clue about the size of sequence_length (“natural” data), don’t batch by default; measure, tentatively try adding it, and add OOM checks so you can recover when it fails (and it will fail at some point if you don’t control sequence_length).
If your sequence_length is very regular, batching is more likely to be VERY interesting; measure and push it until you get OOMs.
The larger the GPU, the more likely batching is to be interesting.
As soon as you enable batching, make sure you can handle OOMs nicely (a minimal recovery sketch is shown below).
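One simple way to handle OOMs once batching is enabled is to catch the error and retry with a smaller batch. A minimal sketch, assuming a PyTorch GPU pipeline; run_with_fallback is a hypothetical helper, not part of transformers:

import torch
from transformers import pipeline

pipe = pipeline("text-classification", device=0)

def run_with_fallback(data, batch_size=64):
    # Halve the batch size until the data fits into GPU memory
    while batch_size >= 1:
        try:
            return list(pipe(data, batch_size=batch_size))
        except RuntimeError as e:
            # CUDA OOM surfaces as a RuntimeError mentioning "out of memory"
            if "out of memory" not in str(e):
                raise
            torch.cuda.empty_cache()
            batch_size //= 2
    raise RuntimeError("Out of memory even with batch_size=1")

results = run_with_fallback(["This is a test"] * 1000)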
Pipeline chunk batching
zero-shot-classification and question-answering are slightly specific in the sense that a single input might yield multiple forward passes of a model. Under normal circumstances, this would cause issues with the batch_size argument.
In order to circumvent this issue, both of these pipelines are a bit specific: they are ChunkPipeline instead of regular Pipeline. In short:
preprocessed = pipe.preprocess(inputs)
model_outputs = pipe.forward(preprocessed)
outputs = pipe.postprocess(model_outputs)
Now becomes:
all_model_outputs = []
for preprocessed in pipe.preprocess(inputs):
    model_outputs = pipe.forward(preprocessed)
    all_model_outputs.append(model_outputs)
outputs = pipe.postprocess(all_model_outputs)
This should be very transparent to your code because the pipelines are used in the same way.
This is a simplified view, since the pipeline handles the chunk batching automatically. This means you don’t have to care about how many forward passes your inputs will actually trigger; you can optimize batch_size independently of the inputs. The caveats from the previous section still apply.
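As a concrete sketch (reusing the distilbert-base-cased-distilled-squad checkpoint from the example above), batch_size is set on a ChunkPipeline exactly as on any other pipeline:

from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
questions = [
    {"question": "Where do I live?", "context": "My name is Wolfgang and I live in Berlin."},
    {"question": "What is my name?", "context": "My name is Wolfgang and I live in Berlin."},
]
# batch_size groups the underlying forward passes, however many each question triggers
for answer in qa(questions, batch_size=8):
    print(answer)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': ...}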
Pipeline custom code
If you want to override a specific pipeline:
Don’t hesitate to create an issue for your task at hand; the goal of pipelines is to be easy to use and to support most cases, so transformers could maybe support your use case.
If you simply want to try it out, you can:
Subclass your pipeline of choice
class MyPipeline(TextClassificationPipeline):
    def postprocess(self, model_outputs, **kwargs):
        # Your custom post-processing goes here, e.g. rescaling the scores:
        # scores = scores * 100
        ...


my_pipeline = MyPipeline(model=model, tokenizer=tokenizer, ...)
# or, if you use the pipeline() function:
my_pipeline = pipeline(model="xxxx", pipeline_class=MyPipeline)
That should enable you to do all the custom code you want.
Implementing a pipeline
Implementing a new pipeline
Audio
Pipelines available for audio tasks include the following.
AudioClassificationPipeline
class transformers.AudioClassificationPipeline
< source >
( *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Audio classification pipeline using any AutoModelForAudioClassification. This pipeline predicts the class of a raw waveform or an audio file. In case of an audio file, ffmpeg should be installed to support multiple audio formats.
Example:
>>> from transformers import pipeline
>>> classifier = pipeline(model="superb/wav2vec2-base-superb-ks")
>>> classifier("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac")
[{'score': 0.997, 'label': '_unknown_'}, {'score': 0.002, 'label': 'left'}, {'score': 0.0, 'label': 'yes'}, {'score': 0.0, 'label': 'down'}, {'score': 0.0, 'label': 'stop'}]
Learn more about the basics of using a pipeline in the pipeline tutorial
This pipeline can currently be loaded from pipeline() using the following task identifier: "audio-classification".
See the list of available models on huggingface.co/models.
__call__
< source >
( inputs: typing.Union[numpy.ndarray, bytes, str] **kwargs ) → A list of dict with the following keys
Parameters
inputs (np.ndarray or bytes or str or dict) — The inputs are either:
str that is the filename of the audio file, the file will be read at the correct sampling rate to get the waveform using ffmpeg. This requires ffmpeg to be installed on the system.
bytes it is supposed to be the content of an audio file and is interpreted by ffmpeg in the same way.
(np.ndarray of shape (n, ) of type np.float32 or np.float64) Raw audio at the correct sampling rate (no further check will be done)
dict form can be used to pass raw audio sampled at arbitrary sampling_rate and let this pipeline do the resampling. The dict must either be in the format {"sampling_rate": int, "raw": np.array} or {"sampling_rate": int, "array": np.array}, where the key "raw" or "array" is used to denote the raw audio waveform.
top_k (int, optional, defaults to None) — The number of top labels that will be returned by the pipeline. If the provided number is None or higher than the number of labels available in the model configuration, it will default to the number of labels.
Returns
A list of dict with the following keys
label (str) — The label predicted.
score (float) — The corresponding probability.
Classify the sequence(s) given as inputs. See the AudioClassificationPipeline documentation for more information.
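For instance, a minimal sketch of the dict form, assuming the datasets library is installed and reusing the superb/wav2vec2-base-superb-ks checkpoint from the example above:

from datasets import load_dataset
from transformers import pipeline

classifier = pipeline(task="audio-classification", model="superb/wav2vec2-base-superb-ks")
dataset = load_dataset("ashraq/esc50", split="train")
sample = dataset[0]["audio"]  # a dict with "array" and "sampling_rate" keys
# The pipeline resamples the raw waveform to the sampling rate the model expects
classifier({"sampling_rate": sample["sampling_rate"], "raw": sample["array"]})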
AutomaticSpeechRecognitionPipeline
class transformers.AutomaticSpeechRecognitionPipeline
< source >
( model: PreTrainedModel feature_extractor: typing.Union[ForwardRef('SequenceFeatureExtractor'), str] = None tokenizer: typing.Optional[transformers.tokenization_utils.PreTrainedTokenizer] = None decoder: typing.Union[ForwardRef('BeamSearchDecoderCTC'), str, NoneType] = None modelcard: typing.Optional[transformers.modelcard.ModelCard] = None framework: typing.Optional[str] = None task: str = '' args_parser: ArgumentHandler = None device: typing.Union[int, ForwardRef('torch.device')] = None torch_dtype: typing.Union[str, ForwardRef('torch.dtype'), NoneType] = None binary_output: bool = False **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
feature_extractor (SequenceFeatureExtractor) — The feature extractor that will be used by the pipeline to encode waveform for the model.
chunk_length_s (float, optional, defaults to 0) — The input length for each chunk. If chunk_length_s = 0 then chunking is disabled (default).
For more information on how to effectively use chunk_length_s, please have a look at the ASR chunking blog post.
stride_length_s (float, optional, defaults to chunk_length_s / 6) — The length of stride on the left and right of each chunk. Used only with chunk_length_s > 0. This enables the model to see more context and infer letters better than without this context but the pipeline discards the stride bits at the end to make the final reconstitution as perfect as possible.
For more information on how to effectively use stride_length_s, please have a look at the ASR chunking blog post.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
device (Union[int, torch.device], optional) — Device ordinal for CPU/GPU supports. Setting this to None will leverage CPU, a positive will run the model on the associated CUDA device id.
decoder (pyctcdecode.BeamSearchDecoderCTC, optional) — PyCTCDecode’s BeamSearchDecoderCTC can be passed for language model boosted decoding. See Wav2Vec2ProcessorWithLM for more information.
Pipeline that aims at extracting spoken text contained within some audio.
The input can be either a raw waveform or an audio file. In the case of an audio file, ffmpeg should be installed to support multiple audio formats.
Example:
>>> from transformers import pipeline
>>> transcriber = pipeline(model="openai/whisper-base")
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac")
{'text': ' He hoped there would be stew for dinner, turnips and carrots and bruised potatoes and fat mutton pieces to be ladled out in thick, peppered flour-fatten sauce.'}
Learn more about the basics of using a pipeline in the pipeline tutorial
__call__
< source >
( inputs: typing.Union[numpy.ndarray, bytes, str] **kwargs ) → Dict
Parameters
inputs (np.ndarray or bytes or str or dict) — The inputs are either:
str that is either the filename of a local audio file, or a public URL address to download the audio file. The file will be read at the correct sampling rate to get the waveform using ffmpeg. This requires ffmpeg to be installed on the system.
bytes it is supposed to be the content of an audio file and is interpreted by ffmpeg in the same way.
(np.ndarray of shape (n, ) of type np.float32 or np.float64) Raw audio at the correct sampling rate (no further check will be done)
dict form can be used to pass raw audio sampled at arbitrary sampling_rate and let this pipeline do the resampling. The dict must be in the format {"sampling_rate": int, "raw": np.array}, optionally with a "stride": (left: int, right: int) that asks the pipeline to ignore the first left samples and the last right samples in decoding (but use them at inference to provide more context to the model). Only use stride with CTC models.
return_timestamps (optional, str or bool) — Only available for pure CTC models (Wav2Vec2, HuBERT, etc) and the Whisper model. Not available for other sequence-to-sequence models.
For CTC models, timestamps can take one of two formats:
"char": the pipeline will return timestamps along the text for every character in the text. For instance, if you get [{"text": "h", "timestamp": (0.5, 0.6)}, {"text": "i", "timestamp": (0.7, 0.9)}], then it means the model predicts that the letter “h” was spoken after 0.5 and before 0.6 seconds.
"word": the pipeline will return timestamps along the text for every word in the text. For instance, if you get [{"text": "hi ", "timestamp": (0.5, 0.9)}, {"text": "there", "timestamp": (1.0, 1.5)}], then it means the model predicts that the word “hi” was spoken after 0.5 and before 0.9 seconds.
For the Whisper model, timestamps can take one of two formats:
"word": same as above for word-level CTC timestamps. Word-level timestamps are predicted through the dynamic-time warping (DTW) algorithm, an approximation to word-level timestamps by inspecting the cross-attention weights.
True: the pipeline will return timestamps along the text for segments of words in the text. For instance, if you get [{"text": " Hi there!", "timestamp": (0.5, 1.5)}], then it means the model predicts that the segment “Hi there!” was spoken after 0.5 and before 1.5 seconds. Note that a segment of text refers to a sequence of one or more words, rather than individual words as with word-level timestamps.
generate_kwargs (dict, optional) — The dictionary of ad-hoc parametrization of generate_config to be used for the generation call. For a complete overview of generate, check the following guide.
max_new_tokens (int, optional) — The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt.
A dictionary with the following keys:
text (str): The recognized text.
chunks (optional, List[Dict]) — When using return_timestamps, chunks will become a list containing all the various text chunks identified by the model, e.g. [{"text": "hi ", "timestamp": (0.5, 0.9)}, {"text": "there", "timestamp": (1.0, 1.5)}]. The original full text can roughly be recovered by doing "".join(chunk["text"] for chunk in output["chunks"]).
Transcribe the audio sequence(s) given as inputs to text. See the AutomaticSpeechRecognitionPipeline documentation for more information.
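A minimal sketch combining chunked long-form transcription with segment-level timestamps, reusing the openai/whisper-base checkpoint from the example above:

from transformers import pipeline

transcriber = pipeline(model="openai/whisper-base", chunk_length_s=30)
out = transcriber(
    "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac",
    return_timestamps=True,  # segment-level timestamps for Whisper
)
print(out["text"])
print(out["chunks"])  # list of {"text": ..., "timestamp": (start, end)}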
TextToAudioPipeline
class transformers.TextToAudioPipeline
< source >
( *args vocoder = None sampling_rate = None **kwargs )
Text-to-audio generation pipeline using any AutoModelForTextToWaveform or AutoModelForTextToSpectrogram. This pipeline generates an audio file from an input text and optional other conditional inputs.
Example:
>>> from transformers import pipeline
>>> pipe = pipeline(model="suno/bark-small")
>>> output = pipe("Hey it's HuggingFace on the phone!")
>>> audio = output["audio"]
>>> sampling_rate = output["sampling_rate"]
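To persist the result, one option (a sketch, assuming scipy is installed and the waveform is mono) is to write it out as a WAV file:

import numpy as np
import scipy.io.wavfile
from transformers import pipeline

pipe = pipeline(model="suno/bark-small")
output = pipe("Hey it's HuggingFace on the phone!")

# Squeeze a possible (nb_channels, audio_length) array down to 1D for a mono WAV file
scipy.io.wavfile.write("bark_out.wav", rate=output["sampling_rate"], data=np.squeeze(output["audio"]))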
Learn more about the basics of using a pipeline in the pipeline tutorial
This pipeline can currently be loaded from pipeline() using the following task identifiers: "text-to-speech" or "text-to-audio".
See the list of available models on huggingface.co/models.
__call__
< source >
( text_inputs: typing.Union[str, typing.List[str]] **forward_params ) → A dict or a list of dict
Parameters
text_inputs (str or List[str]) — The text(s) to generate.
forward_params (optional) — Parameters passed to the model generation/forward method.
Returns
A dict or a list of dict
The dictionaries have two keys:
audio (np.ndarray of shape (nb_channels, audio_length)) — The generated audio waveform.
sampling_rate (int) — The sampling rate of the generated audio waveform.
Generates speech/audio from the inputs. See the TextToAudioPipeline documentation for more information.
ZeroShotAudioClassificationPipeline
class transformers.ZeroShotAudioClassificationPipeline
< source >
( **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Zero shot audio classification pipeline using ClapModel. This pipeline predicts the class of an audio when you provide an audio and a set of candidate_labels.
Example:
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> dataset = load_dataset("ashraq/esc50")
>>> audio = next(iter(dataset["train"]["audio"]))["array"]
>>> classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-unfused")
>>> classifier(audio, candidate_labels=["Sound of a dog", "Sound of vacuum cleaner"])
[{'score': 0.9996, 'label': 'Sound of a dog'}, {'score': 0.0004, 'label': 'Sound of vacuum cleaner'}]
Learn more about the basics of using a pipeline in the pipeline tutorial
This audio classification pipeline can currently be loaded from pipeline() using the following task identifier: "zero-shot-audio-classification".
See the list of available models on huggingface.co/models.
__call__
< source >
( audios: typing.Union[numpy.ndarray, bytes, str] **kwargs )
Parameters
audios (str, List[str], np.array or List[np.array]) — The pipeline handles three types of inputs:
A string containing a http link pointing to an audio
A string containing a local path to an audio
An audio loaded in numpy
candidate_labels (List[str]) — The candidate labels for this audio
hypothesis_template (str, optional, defaults to "This is a sound of {}") — The sentence used in conjunction with candidate_labels to attempt the audio classification by replacing the placeholder with the candidate_labels. Then likelihood is estimated by using logits_per_audio.
Assign labels to the audio(s) passed as inputs.
Computer vision
Pipelines available for computer vision tasks include the following.
DepthEstimationPipeline
class transformers.DepthEstimationPipeline
< source >
( *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Depth estimation pipeline using any AutoModelForDepthEstimation. This pipeline predicts the depth of an image.
Example:
>>> from transformers import pipeline
>>> depth_estimator = pipeline(task="depth-estimation", model="Intel/dpt-large")
>>> output = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")
>>> # The "predicted_depth" output is a tensor with one depth value per pixel
>>> output["predicted_depth"].shape
torch.Size([1, 384, 384])
Learn more about the basics of using a pipeline in the pipeline tutorial
This depth estimation pipeline can currently be loaded from pipeline() using the following task identifier: "depth-estimation".
See the list of available models on huggingface.co/models.
__call__
< source >
( images: typing.Union[str, typing.List[str], ForwardRef('Image.Image'), typing.List[ForwardRef('Image.Image')]] **kwargs )
Parameters
images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:
A string containing a http link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images, which must then be passed as a list. Images in a batch must all be in the same format: all as http links, all as local paths, or all as PIL images.
top_k (int, optional, defaults to 5) — The number of top labels that will be returned by the pipeline. If the provided number is higher than the number of labels available in the model configuration, it will default to the number of labels.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.
Predict the depth of the image(s) passed as inputs.
ImageClassificationPipeline
class transformers.ImageClassificationPipeline
< source >
( *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Image classification pipeline using any AutoModelForImageClassification. This pipeline predicts the class of an image.
Example:
>>> from transformers import pipeline
>>> classifier = pipeline(model="microsoft/beit-base-patch16-224-pt22k-ft22k")
>>> classifier("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'score': 0.442, 'label': 'macaw'}, {'score': 0.088, 'label': 'popinjay'}, {'score': 0.075, 'label': 'parrot'}, {'score': 0.073, 'label': 'parodist, lampooner'}, {'score': 0.046, 'label': 'poll, poll_parrot'}]
Learn more about the basics of using a pipeline in the pipeline tutorial
This image classification pipeline can currently be loaded from pipeline() using the following task identifier: "image-classification".
See the list of available models on huggingface.co/models.
__call__
< source >
( images: typing.Union[str, typing.List[str], ForwardRef('Image.Image'), typing.List[ForwardRef('Image.Image')]] **kwargs )
Parameters
images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:
A string containing a http link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images, which must then be passed as a list. Images in a batch must all be in the same format: all as http links, all as local paths, or all as PIL images.
top_k (int, optional, defaults to 5) — The number of top labels that will be returned by the pipeline. If the provided number is higher than the number of labels available in the model configuration, it will default to the number of labels.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.
Assign labels to the image(s) passed as inputs.
ImageSegmentationPipeline
class transformers.ImageSegmentationPipeline
< source >
( *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Image segmentation pipeline using any AutoModelForXXXSegmentation. This pipeline predicts masks of objects and their classes.
Example:
>>> from transformers import pipeline
>>> segmenter = pipeline(model="facebook/detr-resnet-50-panoptic")
>>> segments = segmenter("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
>>> len(segments)
2
>>> segments[0]["label"]
'bird'
>>> segments[1]["label"]
'bird'
>>> type(segments[0]["mask"])
<class 'PIL.Image.Image'>
>>> segments[0]["mask"].size
(768, 512)
This image segmentation pipeline can currently be loaded from pipeline() using the following task identifier: "image-segmentation".
See the list of available models on huggingface.co/models.
__call__
< source >
( images **kwargs )
Parameters
images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:
A string containing an HTTP(S) link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. Images in a batch must all be in the same format: all as HTTP(S) links, all as local paths, or all as PIL images.
subtask (str, optional) — Segmentation task to be performed, chosen among [semantic, instance, panoptic] depending on model capabilities. If not set, the pipeline will attempt to resolve it in the following order: panoptic, instance, semantic.
threshold (float, optional, defaults to 0.9) — Probability threshold to filter out predicted masks.
mask_threshold (float, optional, defaults to 0.5) — Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.5) — Mask overlap threshold to eliminate small, disconnected segments.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.
Perform segmentation (detect masks & classes) in the image(s) passed as inputs.
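A minimal sketch of forcing a specific subtask and threshold, reusing the facebook/detr-resnet-50-panoptic checkpoint from the example above (the parameter values are illustrative):

from transformers import pipeline

segmenter = pipeline(task="image-segmentation", model="facebook/detr-resnet-50-panoptic")
segments = segmenter(
    "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
    subtask="panoptic",  # one of "semantic", "instance", "panoptic"
    threshold=0.9,  # drop masks predicted with lower probability
)
for segment in segments:
    print(segment["label"], segment["mask"].size)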
ImageToImagePipeline
class transformers.ImageToImagePipeline
< source >
( *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Image to Image pipeline using any AutoModelForImageToImage. This pipeline generates an image based on a previous image input.
Example:
>>> from PIL import Image
>>> import requests
>>> from transformers import pipeline
>>> upscaler = pipeline("image-to-image", model="caidas/swin2SR-classical-sr-x2-64")
>>> img = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> img = img.resize((64, 64))
>>> upscaled_img = upscaler(img)
>>> img.size
(64, 64)
>>> upscaled_img.size
(144, 144)
This image to image pipeline can currently be loaded from pipeline() using the following task identifier: "image-to-image".
See the list of available models on huggingface.co/models.
__call__
< source >
( images: typing.Union[str, typing.List[str], ForwardRef('Image.Image'), typing.List[ForwardRef('Image.Image')]] **kwargs )
Parameters
images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:
A string containing a http link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images, which must then be passed as a list. Images in a batch must all be in the same format: all as http links, all as local paths, or all as PIL images.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is used and the call may block forever.
Transform the image(s) passed as inputs.
ObjectDetectionPipeline
class transformers.ObjectDetectionPipeline
< source >
( *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Object detection pipeline using any AutoModelForObjectDetection. This pipeline predicts bounding boxes of objects and their classes.
Example:
>>> from transformers import pipeline
>>> detector = pipeline(model="facebook/detr-resnet-50")
>>> detector("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'score': 0.997, 'label': 'bird', 'box': {'xmin': 69, 'ymin': 171, 'xmax': 396, 'ymax': 507}}, {'score': 0.999, 'label': 'bird', 'box': {'xmin': 398, 'ymin': 105, 'xmax': 767, 'ymax': 507}}]
>>> # x, y are expressed relative to the top left hand corner.
Learn more about the basics of using a pipeline in the pipeline tutorial
This object detection pipeline can currently be loaded from pipeline() using the following task identifier: "object-detection".
See the list of available models on huggingface.co/models.
__call__
< source >
( *args **kwargs )
Parameters
images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:
A string containing an HTTP(S) link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. Images in a batch must all be in the same format: all as HTTP(S) links, all as local paths, or all as PIL images.
threshold (float, optional, defaults to 0.9) — The probability necessary to make a prediction.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.
Detect objects (bounding boxes & classes) in the image(s) passed as inputs.
VideoClassificationPipeline
class transformers.VideoClassificationPipeline
< source >
( *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Video classification pipeline using any AutoModelForVideoClassification. This pipeline predicts the class of a video.
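Example (a sketch: the MCG-NJU/videomae-base-finetuned-kinetics checkpoint and the local video path are illustrative assumptions, and a video decoding backend such as decord must be installed):

from transformers import pipeline

video_classifier = pipeline(task="video-classification", model="MCG-NJU/videomae-base-finetuned-kinetics")
video_classifier("path/to/video.mp4", top_k=2)  # hypothetical local path; returns the two most likely labels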
This video classification pipeline can currently be loaded from pipeline() using the following task identifier: "video-classification".
See the list of available models on huggingface.co/models.
__call__
< source >
( videos: typing.Union[str, typing.List[str]] **kwargs )
Parameters
videos (str, List[str]) — The pipeline handles two types of videos:
A string containing a http link pointing to a video
A string containing a local path to a video
The pipeline accepts either a single video or a batch of videos, which must then be passed as a list. Videos in a batch must all be in the same format: all as http links or all as local paths.
top_k (int, optional, defaults to 5) — The number of top labels that will be returned by the pipeline. If the provided number is higher than the number of labels available in the model configuration, it will default to the number of labels.
num_frames (int, optional, defaults to self.model.config.num_frames) — The number of frames sampled from the video to run the classification on. If not provided, will default to the number of frames specified in the model configuration.
frame_sampling_rate (int, optional, defaults to 1) — The sampling rate used to select frames from the video. If not provided, will default to 1, i.e. every frame will be used.
Assign labels to the video(s) passed as inputs.
ZeroShotImageClassificationPipeline
class transformers.ZeroShotImageClassificationPipeline
< source >
( **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Zero shot image classification pipeline using CLIPModel. This pipeline predicts the class of an image when you provide an image and a set of candidate_labels.
Example:
>>> from transformers import pipeline
>>> classifier = pipeline(model="openai/clip-vit-large-patch14")
>>> classifier(
... "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
... candidate_labels=["animals", "humans", "landscape"],
... )
[{'score': 0.965, 'label': 'animals'}, {'score': 0.03, 'label': 'humans'}, {'score': 0.005, 'label': 'landscape'}]
>>> classifier(
... "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
... candidate_labels=["black and white", "photorealist", "painting"],
... )
[{'score': 0.996, 'label': 'black and white'}, {'score': 0.003, 'label': 'photorealist'}, {'score': 0.0, 'label': 'painting'}]
Learn more about the basics of using a pipeline in the pipeline tutorial
This image classification pipeline can currently be loaded from pipeline() using the following task identifier: "zero-shot-image-classification".
See the list of available models on huggingface.co/models.
__call__
< source >
( images: typing.Union[str, typing.List[str], ForwardRef('Image'), typing.List[ForwardRef('Image')]] **kwargs )
Parameters
images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:
A string containing a http link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
candidate_labels (List[str]) — The candidate labels for this image
hypothesis_template (str, optional, defaults to "This is a photo of {}") — The sentence used in conjunction with candidate_labels to attempt the image classification by replacing the placeholder with the candidate_labels. Then likelihood is estimated by using logits_per_image.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.
Assign labels to the image(s) passed as inputs.
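A minimal sketch of customizing hypothesis_template, reusing the openai/clip-vit-large-patch14 checkpoint from the example above (the template wording is an illustrative assumption):

from transformers import pipeline

classifier = pipeline(task="zero-shot-image-classification", model="openai/clip-vit-large-patch14")
classifier(
    "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
    hypothesis_template="A photo containing {}.",  # "{}" is replaced by each candidate label
)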
ZeroShotObjectDetectionPipeline
class transformers.ZeroShotObjectDetectionPipeline
< source >
( **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating if the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Zero shot object detection pipeline using OwlViTForObjectDetection. This pipeline predicts bounding boxes of objects when you provide an image and a set of candidate_labels.
Example:
>>> from transformers import pipeline
>>> detector = pipeline(model="google/owlvit-base-patch32", task="zero-shot-object-detection")
>>> detector(
... "http://images.cocodataset.org/val2017/000000039769.jpg",
... candidate_labels=["cat", "couch"],
... )
[{'score': 0.287, 'label': 'cat', 'box': {'xmin': 324, 'ymin': 20, 'xmax': 640, 'ymax': 373}}, {'score': 0.254, 'label': 'cat', 'box': {'xmin': 1, 'ymin': 55, 'xmax': 315, 'ymax': 472}}, {'score': 0.121, 'label': 'couch', 'box': {'xmin': 4, 'ymin': 0, 'xmax': 642, 'ymax': 476}}]
>>> detector(
... "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
... candidate_labels=["head", "bird"],
... )
[{'score': 0.119, 'label': 'bird', 'box': {'xmin': 71, 'ymin': 170, 'xmax': 410, 'ymax': 508}}]
Learn more about the basics of using a pipeline in the pipeline tutorial
This object detection pipeline can currently be loaded from pipeline() using the following task identifier: "zero-shot-object-detection".
See the list of available models on huggingface.co/models.
__call__
< source >
( image: typing.Union[str, ForwardRef('Image.Image'), typing.List[typing.Dict[str, typing.Any]]] candidate_labels: typing.Union[str, typing.List[str]] = None **kwargs )
Parameters
image (str, PIL.Image or List[Dict[str, Any]]) — The pipeline handles three types of images:
A string containing an http url pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
You can use this parameter to send a list of images, a dataset, or a generator directly, like so:
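A minimal sketch of the batched dict format, reusing the google/owlvit-base-patch32 checkpoint from the example above:

from transformers import pipeline

detector = pipeline(model="google/owlvit-base-patch32", task="zero-shot-object-detection")
detector(
    [
        {
            "image": "http://images.cocodataset.org/val2017/000000039769.jpg",
            "candidate_labels": ["cat", "couch"],
        },
        {
            "image": "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
            "candidate_labels": ["head", "bird"],
        },
    ]
)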
Detect objects (bounding boxes & classes) in the image(s) passed as inputs.
Natural Language Processing
Pipelines available for natural language processing tasks include the following.
ConversationalPipeline
class transformers.Conversation
< source >
( messages: typing.Union[str, typing.List[typing.Dict[str, str]]] = None conversation_id: UUID = None **deprecated_kwargs )
Parameters
messages (Union[str, List[Dict[str, str]]], optional) — The initial messages to start the conversation, either a string, or a list of dicts containing “role” and “content” keys. If a string is passed, it is interpreted as a single message with the “user” role.
conversation_id (uuid.UUID, optional) — Unique identifier for the conversation. If not provided, a random UUID4 id will be assigned to the conversation.
Utility class containing a conversation and its history. This class is meant to be used as an input to the ConversationalPipeline. The conversation contains several utility functions to manage the addition of new user inputs and generated model responses.
Usage:
conversation = Conversation("Going to the movies tonight - any suggestions?")
conversation.add_message({"role": "assistant", "content": "The Big lebowski."})
conversation.add_message({"role": "user", "content": "Is it good?"})
add_user_input
< source >
( text: str overwrite: bool = False )
Add a user input to the conversation for the next round. This is a legacy method that assumes that inputs must alternate user/assistant/user/assistant, and so will not add multiple user messages in succession. We recommend just using add_message with role “user” instead.
This is a legacy method. We recommend just using add_message with an appropriate role instead.
This is a legacy method that no longer has any effect, as the Conversation no longer distinguishes between processed and unprocessed user input.
class transformers.ConversationalPipeline
< source >
( *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, while a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
min_length_for_response (int, optional, defaults to 32) — The minimum length (in number of tokens) for a response.
minimum_tokens (int, optional, defaults to 10) — The minimum length of tokens to leave for a response.
Multi-turn conversational pipeline.
Example:
>>> from transformers import pipeline, Conversation
>>> chatbot = pipeline(model="microsoft/DialoGPT-medium")
>>> conversation = Conversation("Going to the movies tonight - any suggestions?")
>>> conversation = chatbot(conversation)
>>> conversation.generated_responses[-1]
'The Big Lebowski'
>>> conversation.add_user_input("Is it an action movie?")
>>> conversation = chatbot(conversation)
>>> conversation.generated_responses[-1]
"It's a comedy."
Learn more about the basics of using a pipeline in the pipeline tutorial
This conversational pipeline can currently be loaded from pipeline() using the following task identifier: "conversational".
The models that this pipeline can use are models that have been fine-tuned on a multi-turn conversational task, currently: ‘microsoft/DialoGPT-small’, ‘microsoft/DialoGPT-medium’, ‘microsoft/DialoGPT-large’. See the up-to-date list of available models on huggingface.co/models.
__call__
< source >
( conversations: typing.Union[transformers.pipelines.conversational.Conversation, typing.List[transformers.pipelines.conversational.Conversation]] num_workers = 0 **kwargs ) → Conversation or a list of Conversation
Parameters
conversations (a Conversation or a list of Conversation) — Conversations to generate responses for.
clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether or not to clean up the potential extra spaces in the text output.
generate_kwargs — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework here).
Conversation(s) with updated generated responses for those containing a new user input.
Generate responses for the conversation(s) given as inputs.
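As a minimal sketch of batching, __call__ also accepts a list of Conversation objects and returns one updated Conversation per input (the replies depend on the model and are not shown):
>>> from transformers import pipeline, Conversation
>>> chatbot = pipeline("conversational", model="microsoft/DialoGPT-medium")
>>> conversations = [
...     Conversation("Going to the movies tonight - any suggestions?"),
...     Conversation("What's the best way to learn Python?"),
... ]
>>> conversations = chatbot(conversations)  # returns a list of updated Conversation objects
>>> [conversation.generated_responses[-1] for conversation in conversations]  # latest model reply for each input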
FillMaskPipeline
class transformers.FillMaskPipeline
< source >
( model: typing.Union[ForwardRef('PreTrainedModel'), ForwardRef('TFPreTrainedModel')] tokenizer: typing.Optional[transformers.tokenization_utils.PreTrainedTokenizer] = None feature_extractor: typing.Optional[ForwardRef('SequenceFeatureExtractor')] = None image_processor: typing.Optional[transformers.image_processing_utils.BaseImageProcessor] = None modelcard: typing.Optional[transformers.modelcard.ModelCard] = None framework: typing.Optional[str] = None task: str = '' args_parser: ArgumentHandler = None device: typing.Union[int, ForwardRef('torch.device')] = None torch_dtype: typing.Union[str, ForwardRef('torch.dtype'), NoneType] = None binary_output: bool = False **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, while a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
top_k (int, defaults to 5) — The number of predictions to return.
targets (str or List[str], optional) — When passed, the model will limit the scores to the passed targets instead of looking up in the whole vocab. If the provided targets are not in the model vocab, they will be tokenized and the first resulting token will be used (with a warning, and that might be slower).
Masked language modeling prediction pipeline using any ModelWithLMHead. See the masked language modeling examples for more information.
Example:
>>> from transformers import pipeline
>>> fill_masker = pipeline(model="bert-base-uncased")
>>> fill_masker("This is a simple [MASK].")
[{'score': 0.042, 'token': 3291, 'token_str': 'problem', 'sequence': 'this is a simple problem.'}, {'score': 0.031, 'token': 3160, 'token_str': 'question', 'sequence': 'this is a simple question.'}, {'score': 0.03, 'token': 8522, 'token_str': 'equation', 'sequence': 'this is a simple equation.'}, {'score': 0.027, 'token': 2028, 'token_str': 'one', 'sequence': 'this is a simple one.'}, {'score': 0.024, 'token': 3627, 'token_str': 'rule', 'sequence': 'this is a simple rule.'}]
Learn more about the basics of using a pipeline in the pipeline tutorial
This mask filling pipeline can currently be loaded from pipeline() using the following task identifier: "fill-mask".
The models that this pipeline can use are models that have been trained with a masked language modeling objective, which includes the bi-directional models in the library. See the up-to-date list of available models on huggingface.co/models.
This pipeline only works for inputs with exactly one token masked. Experimental: We added support for multiple masks. The returned values are raw model output, and correspond to disjoint probabilities where one might expect joint probabilities (See discussion).
This pipeline now supports tokenizer_kwargs. For example, try:
>>> from transformers import pipeline
>>> fill_masker = pipeline(model="bert-base-uncased")
>>> tokenizer_kwargs = {"truncation": True}
>>> fill_masker(
... "This is a simple [MASK]. " + "...with a large amount of repeated text appended. " * 100,
... tokenizer_kwargs=tokenizer_kwargs,
... )
__call__
< source >
( inputs *args **kwargs ) → A list or a list of list of dict
Parameters
args (str or List[str]) — One or several texts (or one list of prompts) with masked tokens.
targets (str or List[str], optional) — When passed, the model will limit the scores to the passed targets instead of looking up in the whole vocab. If the provided targets are not in the model vocab, they will be tokenized and the first resulting token will be used (with a warning, and that might be slower).
top_k (int, optional) — When passed, overrides the number of predictions to return.
Returns
A list or a list of list of dict
Each result comes as list of dictionaries with the following keys:
sequence (str) — The corresponding input with the mask token prediction.
score (float) — The corresponding probability.
token (int) — The predicted token id (to replace the masked one).
token_str (str) — The predicted token (to replace the masked one).
Fill the masked token in the text(s) given as inputs.
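As a small illustration of the targets and top_k arguments described above (the scores depend on the model and are omitted here):
>>> from transformers import pipeline
>>> fill_masker = pipeline(model="bert-base-uncased")
>>> # restrict the predictions to the two given target words and return both of them
>>> fill_masker("This is a simple [MASK].", targets=["problem", "question"], top_k=2)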
NerPipeline
class transformers.TokenClassificationPipeline
< source >
( args_parser = <transformers.pipelines.token_classification.TokenClassificationArgumentHandler object> *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, while a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
ignore_labels (List[str], defaults to ["O"]) — A list of labels to ignore.
grouped_entities (bool, optional, defaults to False) — DEPRECATED, use aggregation_strategy instead. Whether or not to group the tokens corresponding to the same entity together in the predictions or not.
stride (int, optional) — If stride is provided, the pipeline is applied on all the text. The text is split into chunks of size model_max_length. Works only with fast tokenizers and aggregation_strategy different from NONE. The value of this argument defines the number of overlapping tokens between chunks. In other words, the model will shift forward by tokenizer.model_max_length - stride tokens each step.
aggregation_strategy (str, optional, defaults to "none") — The strategy to fuse (or not) tokens based on the model prediction.
"none": Will not do any aggregation and will simply return the raw results from the model.
"simple": Will attempt to group entities following the default schema. (A, B-TAG), (B, I-TAG), (C, I-TAG), (D, B-TAG2), (E, B-TAG2) will end up being [{"word": "ABC", "entity": "TAG"}, {"word": "D", "entity": "TAG2"}, {"word": "E", "entity": "TAG2"}]. Notice that two consecutive B tags end up as different entities. On word-based languages, we might end up splitting words undesirably: imagine Microsoft being tagged as [{"word": "Micro", "entity": "ENTERPRISE"}, {"word": "soft", "entity": "NAME"}]. Look at FIRST, MAX and AVERAGE for ways to mitigate this and disambiguate words (on languages that support that notion, which is basically tokens separated by a space). These mitigations will only work on real words; "New york" might still be tagged with two different entities.
"first": (works only on word-based models) Will use the SIMPLE strategy, except that words cannot end up with different tags. Words will simply use the tag of the first token of the word when there is ambiguity.
"average": (works only on word-based models) Will use the SIMPLE strategy, except that words cannot end up with different tags. Scores will be averaged first across tokens, and then the maximum label is applied.
"max": (works only on word-based models) Will use the SIMPLE strategy, except that words cannot end up with different tags. The word entity will simply be the token with the maximum score.
Named Entity Recognition pipeline using any ModelForTokenClassification. See the named entity recognition examples for more information.
Example:
>>> from transformers import pipeline
>>> token_classifier = pipeline(model="Jean-Baptiste/camembert-ner", aggregation_strategy="simple")
>>> sentence = "Je m'appelle jean-baptiste et je vis à montréal"
>>> tokens = token_classifier(sentence)
>>> tokens
[{'entity_group': 'PER', 'score': 0.9931, 'word': 'jean-baptiste', 'start': 12, 'end': 26}, {'entity_group': 'LOC', 'score': 0.998, 'word': 'montréal', 'start': 38, 'end': 47}]
>>> token = tokens[0]
>>>
>>> sentence[token["start"] : token["end"]]
' jean-baptiste'
>>>
>>> syntaxer = pipeline(model="vblagoje/bert-english-uncased-finetuned-pos", aggregation_strategy="simple")
>>> syntaxer("My name is Sarah and I live in London")
[{'entity_group': 'PRON', 'score': 0.999, 'word': 'my', 'start': 0, 'end': 2}, {'entity_group': 'NOUN', 'score': 0.997, 'word': 'name', 'start': 3, 'end': 7}, {'entity_group': 'AUX', 'score': 0.994, 'word': 'is', 'start': 8, 'end': 10}, {'entity_group': 'PROPN', 'score': 0.999, 'word': 'sarah', 'start': 11, 'end': 16}, {'entity_group': 'CCONJ', 'score': 0.999, 'word': 'and', 'start': 17, 'end': 20}, {'entity_group': 'PRON', 'score': 0.999, 'word': 'i', 'start': 21, 'end': 22}, {'entity_group': 'VERB', 'score': 0.998, 'word': 'live', 'start': 23, 'end': 27}, {'entity_group': 'ADP', 'score': 0.999, 'word': 'in', 'start': 28, 'end': 30}, {'entity_group': 'PROPN', 'score': 0.999, 'word': 'london', 'start': 31, 'end': 37}]
Learn more about the basics of using a pipeline in the pipeline tutorial
This token recognition pipeline can currently be loaded from pipeline() using the following task identifier: "ner" (for predicting the classes of tokens in a sequence: person, organisation, location or miscellaneous).
The models that this pipeline can use are models that have been fine-tuned on a token classification task. See the up-to-date list of available models on huggingface.co/models.
aggregate_words
< source >
( entities: typing.List[dict] aggregation_strategy: AggregationStrategy )
Override tokens from a given word that disagree to force agreement on word boundaries.
Example: micro|soft| com|pany| B-ENT I-NAME I-ENT I-ENT will be rewritten with first strategy as microsoft| company| B-ENT I-ENT
gather_pre_entities
< source >
( sentence: str input_ids: ndarray scores: ndarray offset_mapping: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] special_tokens_mask: ndarray aggregation_strategy: AggregationStrategy )
Fuse various numpy arrays into dicts with all the information needed for aggregation
group_entities
< source >
( entities: typing.List[dict] )
Parameters
entities (dict) — The entities predicted by the pipeline.
Find and group together the adjacent tokens with the same entity predicted.
group_sub_entities
< source >
( entities: typing.List[dict] )
Parameters
entities (dict) — The entities predicted by the pipeline.
Group together the adjacent tokens with the same entity predicted.
See TokenClassificationPipeline for all details.
QuestionAnsweringPipeline
class transformers.QuestionAnsweringPipeline
< source >
( model: typing.Union[ForwardRef('PreTrainedModel'), ForwardRef('TFPreTrainedModel')] tokenizer: PreTrainedTokenizer modelcard: typing.Optional[transformers.modelcard.ModelCard] = None framework: typing.Optional[str] = None task: str = '' **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, while a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Question Answering pipeline using any ModelForQuestionAnswering. See the question answering examples for more information.
Example:
>>> from transformers import pipeline
>>> oracle = pipeline(model="deepset/roberta-base-squad2")
>>> oracle(question="Where do I live?", context="My name is Wolfgang and I live in Berlin")
{'score': 0.9191, 'start': 34, 'end': 40, 'answer': 'Berlin'}
Learn more about the basics of using a pipeline in the pipeline tutorial
This question answering pipeline can currently be loaded from pipeline() using the following task identifier: "question-answering".
The models that this pipeline can use are models that have been fine-tuned on a question answering task. See the up-to-date list of available models on huggingface.co/models.
__call__
< source >
( *args **kwargs ) → A dict or a list of dict
Parameters
args (SquadExample or a list of SquadExample) — One or several SquadExample containing the question and context.
X (SquadExample or a list of SquadExample, optional) — One or several SquadExample containing the question and context (will be treated the same way as if passed as the first positional argument).
data (SquadExample or a list of SquadExample, optional) — One or several SquadExample containing the question and context (will be treated the same way as if passed as the first positional argument).
question (str or List[str]) — One or several question(s) (must be used in conjunction with the context argument).
context (str or List[str]) — One or several context(s) associated with the question(s) (must be used in conjunction with the question argument).
topk (int, optional, defaults to 1) — The number of answers to return (will be chosen by order of likelihood). Note that we return fewer than topk answers if there are not enough options available within the context.
doc_stride (int, optional, defaults to 128) — If the context is too long to fit with the question for the model, it will be split in several chunks with some overlap. This argument controls the size of that overlap.
max_answer_len (int, optional, defaults to 15) — The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
max_seq_len (int, optional, defaults to 384) — The maximum length of the total sentence (context + question) in tokens of each chunk passed to the model. The context will be split in several chunks (using doc_stride as overlap) if needed.
max_question_len (int, optional, defaults to 64) — The maximum length of the question after tokenization. It will be truncated if needed.
handle_impossible_answer (bool, optional, defaults to False) — Whether or not we accept impossible as an answer.
align_to_words (bool, optional, defaults to True) — Attempts to align the answer to real words. Improves quality on space-separated languages. Might hurt on non-space-separated languages (like Japanese or Chinese).
Returns
A dict or a list of dict
Each result comes as a dictionary with the following keys:
score (float) — The probability associated to the answer.
start (int) — The character start index of the answer (in the tokenized version of the input).
end (int) — The character end index of the answer (in the tokenized version of the input).
answer (str) — The answer to the question.
Answer the question(s) given as inputs by using the context(s).
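For example, a minimal sketch of batching several question/context pairs (the answers depend on the model and are not reproduced here):
>>> from transformers import pipeline
>>> oracle = pipeline(model="deepset/roberta-base-squad2")
>>> questions = ["Where do I live?", "What is my name?"]
>>> contexts = ["My name is Wolfgang and I live in Berlin"] * 2
>>> oracle(question=questions, context=contexts)  # returns one result dict per question/context pair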
create_sample
< source >
( question: typing.Union[str, typing.List[str]] context: typing.Union[str, typing.List[str]] ) → One or a list of SquadExample
Parameters
question (str or List[str]) — The question(s) asked.
context (str or List[str]) — The context(s) in which we will look for the answer.
Returns
One or a list of SquadExample
The corresponding SquadExample grouping question and context.
QuestionAnsweringPipeline leverages the SquadExample internally. This helper method encapsulate all the logic for converting question(s) and context(s) to SquadExample.
We currently support extractive question answering.
span_to_answer
< source >
( text: str start: int end: int ) → Dictionary like {'answer': str, 'start': int, 'end': int}
Parameters
text (str) — The actual context to extract the answer from.
start (int) — The answer starting token index.
end (int) — The answer end token index.
Returns
Dictionary like {'answer': str, 'start': int, 'end': int}
When decoding from token probabilities, this method maps token indexes to the actual word in the initial context.
SummarizationPipeline
class transformers.SummarizationPipeline
< source >
( *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, while a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Summarize news articles and other documents.
This summarizing pipeline can currently be loaded from pipeline() using the following task identifier: "summarization".
The models that this pipeline can use are models that have been fine-tuned on a summarization task, which currently includes 'bart-large-cnn', 't5-small', 't5-base', 't5-large', 't5-3b' and 't5-11b'. See the up-to-date list of available models on huggingface.co/models. For a list of available parameters, see the following documentation.
Usage:
summarizer = pipeline("summarization")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="tf")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)
__call__
< source >
( *args **kwargs ) → A list or a list of list of dict
Parameters
documents (str or List[str]) — One or several articles (or one list of articles) to summarize.
return_text (bool, optional, defaults to True) — Whether or not to include the decoded texts in the outputs
return_tensors (bool, optional, defaults to False) — Whether or not to include the tensors of predictions (as token indices) in the outputs.
clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether or not to clean up the potential extra spaces in the text output.
generate_kwargs — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework here).
Returns
A list or a list of list of dict
Each result comes as a dictionary with the following keys:
summary_text (str, present when return_text=True) — The summary of the corresponding input.
summary_token_ids (torch.Tensor or tf.Tensor, present when return_tensors=True) — The token ids of the summary.
Summarize the text(s) given as inputs.
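As a hedged sketch of these arguments together with generate kwargs, assuming the t5-small checkpoint (the exact summary text depends on the model and is omitted):
>>> from transformers import pipeline
>>> summarizer = pipeline("summarization", model="t5-small")
>>> text = (
...     "Transformers provides thousands of pretrained models to perform tasks on different "
...     "modalities such as text, vision, and audio. These models can be applied to text "
...     "classification, question answering, summarization, translation, and more."
... )
>>> # min_length, max_length and do_sample are forwarded to the model's generate method
>>> summarizer(text, min_length=5, max_length=30, do_sample=False)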
TableQuestionAnsweringPipeline
class transformers.TableQuestionAnsweringPipeline
< source >
( args_parser = <transformers.pipelines.table_question_answering.TableQuestionAnsweringArgumentHandler object> *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, while a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Table Question Answering pipeline using a ModelForTableQuestionAnswering. This pipeline is only available in PyTorch.
Example:
>>> from transformers import pipeline
>>> oracle = pipeline(model="google/tapas-base-finetuned-wtq")
>>> table = {
... "Repository": ["Transformers", "Datasets", "Tokenizers"],
... "Stars": ["36542", "4512", "3934"],
... "Contributors": ["651", "77", "34"],
... "Programming language": ["Python", "Python", "Rust, Python and NodeJS"],
... }
>>> oracle(query="How many stars does the transformers repository have?", table=table)
{'answer': 'AVERAGE > 36542', 'coordinates': [(0, 1)], 'cells': ['36542'], 'aggregator': 'AVERAGE'}
Learn more about the basics of using a pipeline in the pipeline tutorial
This tabular question answering pipeline can currently be loaded from pipeline() using the following task identifier: "table-question-answering".
The models that this pipeline can use are models that have been fine-tuned on a tabular question answering task. See the up-to-date list of available models on huggingface.co/models.
__call__
< source >
( *args **kwargs ) → A dictionary or a list of dictionaries containing results
Parameters
table (pd.DataFrame or Dict) — Pandas DataFrame or dictionary that will be converted to a DataFrame containing all the table values. See above for an example of dictionary.
query (str or List[str]) — Query or list of queries that will be sent to the model alongside the table.
sequential (bool, optional, defaults to False) — Whether to do inference sequentially or as a batch. Batching is faster, but models like SQA require the inference to be done sequentially to extract relations within sequences, given their conversational nature.
padding (bool, str or PaddingStrategy, optional, defaults to False) — Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or TapasTruncationStrategy, optional, defaults to False) — Activates and controls truncation. Accepts the following values:
True or 'drop_rows_to_fit': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate row by row, removing rows from the table.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size).
Returns
A dictionary or a list of dictionaries containing results
Each result is a dictionary with the following keys:
answer (str) — The answer of the query given the table. If there is an aggregator, the answer will be preceded by AGGREGATOR >.
coordinates (List[Tuple[int, int]]) — Coordinates of the cells of the answers.
cells (List[str]) — List of strings made up of the answer cell values.
aggregator (str) — If the model has an aggregator, this returns the aggregator.
Answers queries according to a table. The pipeline accepts several types of inputs which are detailed below:
pipeline(table, query)
pipeline(table, [query])
pipeline(table=table, query=query)
pipeline(table=table, query=[query])
pipeline({"table": table, "query": query})
pipeline({"table": table, "query": [query]})
pipeline([{"table": table, "query": query}, {"table": table, "query": query}])
The table argument should be a dict or a DataFrame built from that dict, containing the whole table:
Example:
data = {
"actors": ["brad pitt", "leonardo di caprio", "george clooney"],
"age": ["56", "45", "59"],
"number of movies": ["87", "53", "69"],
"date of birth": ["7 february 1967", "10 june 1996", "28 november 1967"],
}
This dictionary can be passed in as such, or can be converted to a pandas DataFrame:
Example:
import pandas as pd
table = pd.DataFrame.from_dict(data)
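The resulting DataFrame (and a list of queries) can then be passed to the pipeline. A minimal sketch reusing the oracle and the data dictionary shown above (the answers depend on the model and are omitted):
>>> # each query gets its own result dictionary
>>> oracle(
...     table=table,
...     query=["How many movies does Leonardo Di Caprio have?", "How old is Brad Pitt?"],
... )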
TextClassificationPipeline
class transformers.TextClassificationPipeline
< source >
( **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, while a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
return_all_scores (bool, optional, defaults to False) — Whether to return all prediction scores or just the one of the predicted class.
function_to_apply (str, optional, defaults to "default") — The function to apply to the model outputs in order to retrieve the scores. Accepts four different values:
"default": if the model has a single label, will apply the sigmoid function on the output. If the model has several labels, will apply the softmax function on the output.
"sigmoid": Applies the sigmoid function on the output.
"softmax": Applies the softmax function on the output.
"none": Does not apply any function on the output.
Text classification pipeline using any ModelForSequenceClassification. See the sequence classification examples for more information.
Example:
>>> from transformers import pipeline
>>> classifier = pipeline(model="distilbert-base-uncased-finetuned-sst-2-english")
>>> classifier("This movie is disgustingly good !")
[{'label': 'POSITIVE', 'score': 1.0}]
>>> classifier("Director tried too much.")
[{'label': 'NEGATIVE', 'score': 0.996}]
Learn more about the basics of using a pipeline in the pipeline tutorial
This text classification pipeline can currently be loaded from pipeline() using the following task identifier: "sentiment-analysis" (for classifying sequences according to positive or negative sentiments).
If multiple classification labels are available (model.config.num_labels >= 2), the pipeline will run a softmax over the results. If there is a single label, the pipeline will run a sigmoid over the result.
The models that this pipeline can use are models that have been fine-tuned on a sequence classification task. See the up-to-date list of available models on huggingface.co/models.
__call__
< source >
( *args **kwargs ) → A list or a list of list of dict
Parameters
args (str or List[str] or Dict[str], or List[Dict[str]]) — One or several texts to classify. In order to use text pairs for your classification, you can send a dictionary containing {"text", "text_pair"} keys, or a list of those.
top_k (int, optional, defaults to 1) — How many results to return.
function_to_apply (str, optional, defaults to "default") — The function to apply to the model outputs in order to retrieve the scores. Accepts four different values:
If this argument is not specified, then it will apply the following functions according to the number of labels:
If the model has a single label, will apply the sigmoid function on the output.
If the model has several labels, will apply the softmax function on the output.
Possible values are:
"sigmoid": Applies the sigmoid function on the output.
"softmax": Applies the softmax function on the output.
"none": Does not apply any function on the output.
Returns
A list or a list of list of dict
Each result comes as list of dictionaries with the following keys:
label (str) — The label predicted.
score (float) — The corresponding probability.
If top_k is used, one such dictionary is returned per label.
Classify the text(s) given as inputs.
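As a hedged sketch of the text-pair input format described above, assuming a pair-classification checkpoint such as roberta-large-mnli (the label names and scores depend on the model and are omitted):
>>> from transformers import pipeline
>>> pair_classifier = pipeline(model="roberta-large-mnli")
>>> # a dict with "text" and "text_pair" keys is classified as a sentence pair
>>> pair_classifier(
...     {"text": "A soccer game with multiple males playing.", "text_pair": "Some men are playing a sport."},
...     top_k=3,  # return the scores of all three NLI labels instead of only the best one
... )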
TextGenerationPipeline
class transformers.TextGenerationPipeline
< source >
( *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, while a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Language generation pipeline using any ModelWithLMHead. This pipeline predicts the words that will follow a specified text prompt.
Example:
>>> from transformers import pipeline
>>> generator = pipeline(model="gpt2")
>>> generator("I can't believe you did such a ", do_sample=False)
[{'generated_text': "I can't believe you did such a icky thing to me. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I"}]
>>>
>>> outputs = generator("My tart needs some", num_return_sequences=4, return_full_text=False)
Learn more about the basics of using a pipeline in the pipeline tutorial. You can pass text generation parameters to this pipeline to control stopping criteria, decoding strategy, and more. Learn more about text generation parameters in Text generation strategies and Text generation.
This language generation pipeline can currently be loaded from pipeline() using the following task identifier: "text-generation".
The models that this pipeline can use are models that have been trained with an autoregressive language modeling objective, which includes the uni-directional models in the library (e.g. gpt2). See the list of available models on huggingface.co/models.
__call__
< source >
( text_inputs **kwargs ) → A list or a list of list of dict
Parameters
args (str or List[str]) — One or several prompts (or one list of prompts) to complete.
return_tensors (bool, optional, defaults to False) — Whether or not to return the tensors of predictions (as token indices) in the outputs. If set to True, the decoded text is not returned.
return_text (bool, optional, defaults to True) — Whether or not to return the decoded texts in the outputs.
return_full_text (bool, optional, defaults to True) — If set to False only added text is returned, otherwise the full text is returned. Only meaningful if return_text is set to True.
clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether or not to clean up the potential extra spaces in the text output.
prefix (str, optional) — Prefix added to prompt.
handle_long_generation (str, optional) — By default, this pipeline does not handle long generation (generation that, in one form or another, exceeds the model maximum length). There is no perfect way to address this (more info: https://github.com/huggingface/transformers/issues/14033#issuecomment-948385227). This provides common strategies to work around the problem depending on your use case.
None : default strategy where nothing in particular happens
"hole": Truncates the left of the input and leaves a gap wide enough to let generation happen (this might truncate a lot of the prompt, and is not suitable when generation exceeds the model capacity)
generate_kwargs — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework here).
Returns
A list or a list of list of dict
Returns one of the following dictionaries (cannot return a combination of both generated_text and generated_token_ids):
generated_text (str, present when return_text=True) — The generated text.
generated_token_ids (torch.Tensor or tf.Tensor, present when return_tensors=True) — The token ids of the generated text.
Complete the prompt(s) given as inputs.
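A small sketch combining several of the arguments above with generate kwargs (sampled continuations vary from run to run, so no output is shown):
>>> from transformers import pipeline
>>> generator = pipeline(model="gpt2")
>>> generator(
...     "Once upon a time,",
...     max_new_tokens=20,       # forwarded to generate
...     num_return_sequences=2,  # forwarded to generate
...     do_sample=True,          # forwarded to generate
...     return_full_text=False,  # only the newly generated text is returned
... )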
Text2TextGenerationPipeline
class transformers.Text2TextGenerationPipeline
< source >
( *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, while a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
Pipeline for text to text generation using seq2seq models.
Example:
>>> from transformers import pipeline
>>> generator = pipeline(model="mrm8488/t5-base-finetuned-question-generation-ap")
>>> generator(
... "answer: Manuel context: Manuel has created RuPERTa-base with the support of HF-Transformers and Google"
... )
[{'generated_text': 'question: Who created the RuPERTa-base?'}]
Learn more about the basics of using a pipeline in the pipeline tutorial. You can pass text generation parameters to this pipeline to control stopping criteria, decoding strategy, and more. Learn more about text generation parameters in Text generation strategies and Text generation.
This Text2TextGenerationPipeline pipeline can currently be loaded from pipeline() using the following task identifier: "text2text-generation".
The models that this pipeline can use are models that have been fine-tuned on a translation task. See the up-to-date list of available models on huggingface.co/models. For a list of available parameters, see the following documentation.
Usage:
text2text_generator = pipeline("text2text-generation")
text2text_generator("question: What is 42 ? context: 42 is the answer to life, the universe and everything")
__call__
< source >
( *args **kwargs ) → A list or a list of list of dict
Parameters
args (str or List[str]) — Input text for the encoder.
return_tensors (bool, optional, defaults to False) — Whether or not to include the tensors of predictions (as token indices) in the outputs.
return_text (bool, optional, defaults to True) — Whether or not to include the decoded texts in the outputs.
clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether or not to clean up the potential extra spaces in the text output.
truncation (TruncationStrategy, optional, defaults to TruncationStrategy.DO_NOT_TRUNCATE) — The truncation strategy for the tokenization within the pipeline. TruncationStrategy.DO_NOT_TRUNCATE (default) will never truncate, but it is sometimes desirable to truncate the input to fit the model's max_length instead of throwing an error down the line.
generate_kwargs — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework here).
Returns
A list or a list of list of dict
Each result comes as a dictionary with the following keys:
generated_text (str, present when return_text=True) — The generated text.
generated_token_ids (torch.Tensor or tf.Tensor, present when return_tensors=True) — The token ids of the generated text.
Generate the output text(s) using text(s) given as inputs.
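As a small sketch (assuming the t5-small checkpoint), generate kwargs such as max_new_tokens can be passed directly in the call; the exact output depends on the model and is omitted:
>>> from transformers import pipeline
>>> text2text_generator = pipeline("text2text-generation", model="t5-small")
>>> text2text_generator(
...     "question: What is 42 ? context: 42 is the answer to life, the universe and everything",
...     max_new_tokens=10,  # forwarded to the model's generate method
... )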
check_inputs
< source >
( input_length: int min_length: int max_length: int )
Checks whether there might be something wrong with the given input with regard to the model.
TokenClassificationPipeline
class transformers.TokenClassificationPipeline
< source >
( args_parser = <transformers.pipelines.token_classification.TokenClassificationArgumentHandler object> *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use. For inference, this is not always beneficial; please read Batching with pipelines.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU, while a positive value will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e., pickle) or as raw text.
ignore_labels (List[str], defaults to ["O"]) — A list of labels to ignore.
grouped_entities (bool, optional, defaults to False) — DEPRECATED, use aggregation_strategy instead. Whether or not to group the tokens corresponding to the same entity together in the predictions or not.
stride (int, optional) — If stride is provided, the pipeline is applied on all the text. The text is split into chunks of size model_max_length. Works only with fast tokenizers and aggregation_strategy different from NONE. The value of this argument defines the number of overlapping tokens between chunks. In other words, the model will shift forward by tokenizer.model_max_length - stride tokens each step.
aggregation_strategy (str, optional, defaults to "none") — The strategy to fuse (or not) tokens based on the model prediction.
"none": Will not do any aggregation and will simply return the raw results from the model.
"simple": Will attempt to group entities following the default schema. (A, B-TAG), (B, I-TAG), (C, I-TAG), (D, B-TAG2), (E, B-TAG2) will end up being [{"word": "ABC", "entity": "TAG"}, {"word": "D", "entity": "TAG2"}, {"word": "E", "entity": "TAG2"}]. Notice that two consecutive B tags end up as different entities. On word-based languages, we might end up splitting words undesirably: imagine Microsoft being tagged as [{"word": "Micro", "entity": "ENTERPRISE"}, {"word": "soft", "entity": "NAME"}]. Look at FIRST, MAX and AVERAGE for ways to mitigate this and disambiguate words (on languages that support that notion, which is basically tokens separated by a space). These mitigations will only work on real words; "New york" might still be tagged with two different entities.
"first": (works only on word-based models) Will use the SIMPLE strategy, except that words cannot end up with different tags. Words will simply use the tag of the first token of the word when there is ambiguity.
"average": (works only on word-based models) Will use the SIMPLE strategy, except that words cannot end up with different tags. Scores will be averaged first across tokens, and then the maximum label is applied.
"max": (works only on word-based models) Will use the SIMPLE strategy, except that words cannot end up with different tags. The word entity will simply be the token with the maximum score.
Named Entity Recognition pipeline using any ModelForTokenClassification. See the named entity recognition examples for more information.
Example:
>>> from transformers import pipeline
>>> token_classifier = pipeline(model="Jean-Baptiste/camembert-ner", aggregation_strategy="simple")
>>> sentence = "Je m'appelle jean-baptiste et je vis à montréal"
>>> tokens = token_classifier(sentence)
>>> tokens
[{'entity_group': 'PER', 'score': 0.9931, 'word': 'jean-baptiste', 'start': 12, 'end': 26}, {'entity_group': 'LOC', 'score': 0.998, 'word': 'montréal', 'start': 38, 'end': 47}]
>>> token = tokens[0]
>>>
>>> sentence[token["start"] : token["end"]]
' jean-baptiste'
>>>
>>> syntaxer = pipeline(model="vblagoje/bert-english-uncased-finetuned-pos", aggregation_strategy="simple")
>>> syntaxer("My name is Sarah and I live in London")
[{'entity_group': 'PRON', 'score': 0.999, 'word': 'my', 'start': 0, 'end': 2}, {'entity_group': 'NOUN', 'score': 0.997, 'word': 'name', 'start': 3, 'end': 7}, {'entity_group': 'AUX', 'score': 0.994, 'word': 'is', 'start': 8, 'end': 10}, {'entity_group': 'PROPN', 'score': 0.999, 'word': 'sarah', 'start': 11, 'end': 16}, {'entity_group': 'CCONJ', 'score': 0.999, 'word': 'and', 'start': 17, 'end': 20}, {'entity_group': 'PRON', 'score': 0.999, 'word': 'i', 'start': 21, 'end': 22}, {'entity_group': 'VERB', 'score': 0.998, 'word': 'live', 'start': 23, 'end': 27}, {'entity_group': 'ADP', 'score': 0.999, 'word': 'in', 'start': 28, 'end': 30}, {'entity_group': 'PROPN', 'score': 0.999, 'word': 'london', 'start': 31, 'end': 37}]
Learn more about the basics of using a pipeline in the pipeline tutorial
This token recognition pipeline can currently be loaded from pipeline() using the following task identifier: "ner" (for predicting the classes of tokens in a sequence: person, organisation, location or miscellaneous).
The models that this pipeline can use are models that have been fine-tuned on a token classification task. See the up-to-date list of available models on huggingface.co/models.
__call__
< source >
( inputs: typing.Union[str, typing.List[str]] **kwargs ) → A list or a list of list of dict
Parameters
inputs (str or List[str]) — One or several texts (or one list of texts) for token classification.
Returns
A list or a list of list of dict
Each result comes as a list of dictionaries (one for each token in the corresponding input, or each entity if this pipeline was instantiated with an aggregation_strategy) with the following keys:
word (str) — The token/word classified. This is obtained by decoding the selected tokens. If you want to have the exact string in the original sentence, use start and end.
score (float) — The corresponding probability for entity.
entity (str) — The entity predicted for that token/word (it is named entity_group when aggregation_strategy is not "none").
index (int, only present when aggregation_strategy="none") — The index of the corresponding token in the sentence.
start (int, optional) — The index of the start of the corresponding entity in the sentence. Only exists if the offsets are available within the tokenizer.
end (int, optional) — The index of the end of the corresponding entity in the sentence. Only exists if the offsets are available within the tokenizer.
Classify each token of the text(s) given as inputs.
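As a rough sketch of an aggregation strategy on a word-based model, assuming the dslim/bert-base-NER checkpoint (entity groups and scores depend on the model and are omitted):
>>> from transformers import pipeline
>>> ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="first")
>>> # with "first", sub-word tokens of the same word always share the tag of the word's first token
>>> ner("Hugging Face is based in New York City")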
aggregate_words
< source >
( entities: typing.List[dict] aggregation_strategy: AggregationStrategy )
Override tokens from a given word that disagree to force agreement on word boundaries.
Example: micro|soft| com|pany| B-ENT I-NAME I-ENT I-ENT will be rewritten with first strategy as microsoft| company| B-ENT I-ENT
gather_pre_entities
< source >
( sentence: str input_ids: ndarray scores: ndarray offset_mapping: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] special_tokens_mask: ndarray aggregation_strategy: AggregationStrategy )
Fuse various numpy arrays into dicts with all the information needed for aggregation
group_entities
< source >
( entities: typing.List[dict] )
Parameters
entities (dict) — The entities predicted by the pipeline.
Find and group together the adjacent tokens with the same entity predicted.
group_sub_entities
< source >
( entities: typing.List[dict] )
Parameters
entities (dict) — The entities predicted by the pipeline.
Group together the adjacent tokens with the same entity predicted.
TranslationPipeline
class transformers.TranslationPipeline
< source >
( *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating if the output the pipeline should happen in a binary format (i.e., pickle) or as raw text.
Translates from one language to another.
This translation pipeline can currently be loaded from pipeline() using the following task identifier: "translation_xx_to_yy".
The models that this pipeline can use are models that have been fine-tuned on a translation task. See the up-to-date list of available models on huggingface.co/models. For a list of available parameters, see the following documentation
Usage:
from transformers import pipeline

en_fr_translator = pipeline("translation_en_to_fr")
en_fr_translator("How old are you?")
__call__
< source >
( *args **kwargs ) → A list or a list of list of dict
Parameters
args (str or List[str]) — Texts to be translated.
return_tensors (bool, optional, defaults to False) — Whether or not to include the tensors of predictions (as token indices) in the outputs.
return_text (bool, optional, defaults to True) — Whether or not to include the decoded texts in the outputs.
clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether or not to clean up the potential extra spaces in the text output.
src_lang (str, optional) — The language of the input. Might be required for multilingual models. Will not have any effect for single pair translation models.
tgt_lang (str, optional) — The language of the desired output. Might be required for multilingual models. Will not have any effect for single pair translation models.
generate_kwargs — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework here).
Returns
A list or a list of list of dict
Each result comes as a dictionary with the following keys:
translation_text (str, present when return_text=True) — The translation.
translation_token_ids (torch.Tensor or tf.Tensor, present when return_tensors=True) — The token ids of the translation.
Translate the text(s) given as inputs.
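For a multilingual model that requires src_lang and tgt_lang, a sketch of the call could look like this (facebook/nllb-200-distilled-600M is used here as one such model, with NLLB-style language codes):
from transformers import pipeline

# src_lang / tgt_lang are only needed for multilingual checkpoints such as NLLB.
translator = pipeline("translation", model="facebook/nllb-200-distilled-600M")
translator("How old are you?", src_lang="eng_Latn", tgt_lang="fra_Latn")
# [{'translation_text': ...}]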
ZeroShotClassificationPipeline
class transformers.ZeroShotClassificationPipeline
< source >
( args_parser = <transformers.pipelines.zero_shot_classification.ZeroShotClassificationArgumentHandler object> *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating if the output the pipeline should happen in a binary format (i.e., pickle) or as raw text.
NLI-based zero-shot classification pipeline using a ModelForSequenceClassification trained on NLI (natural language inference) tasks. Equivalent of text-classification pipelines, but these models don’t require a hardcoded number of potential classes, they can be chosen at runtime. It usually means it’s slower but it is much more flexible.
Any combination of sequences and labels can be passed and each combination will be posed as a premise/hypothesis pair and passed to the pretrained model. Then, the logit for entailment is taken as the logit for the candidate label being valid. Any NLI model can be used, but the id of the entailment label must be included in the model config’s :attr:~transformers.PretrainedConfig.label2id.
Example:
>>> from transformers import pipeline
>>> oracle = pipeline(model="facebook/bart-large-mnli")
>>> oracle(
... "I have a problem with my iphone that needs to be resolved asap!!",
... candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
... )
{'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['urgent', 'phone', 'computer', 'not urgent', 'tablet'], 'scores': [0.504, 0.479, 0.013, 0.003, 0.002]}
>>> oracle(
... "I have a problem with my iphone that needs to be resolved asap!!",
... candidate_labels=["english", "german"],
... )
{'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['english', 'german'], 'scores': [0.814, 0.186]}
Learn more about the basics of using a pipeline in the pipeline tutorial
This NLI pipeline can currently be loaded from pipeline() using the following task identifier: "zero-shot-classification".
The models that this pipeline can use are models that have been fine-tuned on an NLI task. See the up-to-date list of available models on huggingface.co/models.
__call__
< source >
( sequences: typing.Union[str, typing.List[str]] *args **kwargs ) → A dict or a list of dict
Parameters
sequences (str or List[str]) — The sequence(s) to classify, will be truncated if the model input is too large.
candidate_labels (str or List[str]) — The set of possible class labels to classify each sequence into. Can be a single label, a string of comma-separated labels, or a list of labels.
hypothesis_template (str, optional, defaults to "This example is {}.") — The template used to turn each label into an NLI-style hypothesis. This template must include a {} or similar syntax for the candidate label to be inserted into the template. For example, the default template is "This example is {}." With the candidate label "sports", this would be fed into the model like "<cls> sequence to classify <sep> This example is sports . <sep>". The default template works well in many cases, but it may be worthwhile to experiment with different templates depending on the task setting.
multi_label (bool, optional, defaults to False) — Whether or not multiple candidate labels can be true. If False, the scores are normalized such that the sum of the label likelihoods for each sequence is 1. If True, the labels are considered independent and probabilities are normalized for each candidate by doing a softmax of the entailment score vs. the contradiction score.
Returns
A dict or a list of dict
Each result comes as a dictionary with the following keys:
sequence (str) — The sequence for which this is the output.
labels (List[str]) — The labels sorted by order of likelihood.
scores (List[float]) — The probabilities for each of the labels.
Classify the sequence(s) given as inputs. See the ZeroShotClassificationPipeline documentation for more information.
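As a sketch of the hypothesis_template and multi_label options (the candidate labels below are arbitrary illustrations):
from transformers import pipeline

oracle = pipeline(model="facebook/bart-large-mnli")

# With multi_label=True each label is scored independently, so the scores no longer sum to 1.
oracle(
    "I have a problem with my iphone that needs to be resolved asap!!",
    candidate_labels=["urgent", "phone", "billing"],
    hypothesis_template="This text is about {}.",
    multi_label=True,
)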
Multimodal
Pipelines available for multimodal tasks include the following.
DocumentQuestionAnsweringPipeline
class transformers.DocumentQuestionAnsweringPipeline
< source >
( *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating if the output the pipeline should happen in a binary format (i.e., pickle) or as raw text.
Document Question Answering pipeline using any AutoModelForDocumentQuestionAnswering. The inputs/outputs are similar to the (extractive) question answering pipeline; however, the pipeline takes an image (and optional OCR’d words/boxes) as input instead of text context.
Example:
>>> from transformers import pipeline
>>> document_qa = pipeline(model="impira/layoutlm-document-qa")
>>> document_qa(
... image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png",
... question="What is the invoice number?",
... )
[{'score': 0.425, 'answer': 'us-001', 'start': 16, 'end': 16}]
Learn more about the basics of using a pipeline in the pipeline tutorial
This document question answering pipeline can currently be loaded from pipeline() using the following task identifier: "document-question-answering".
The models that this pipeline can use are models that have been fine-tuned on a document question answering task. See the up-to-date list of available models on huggingface.co/models.
__call__
< source >
( image: typing.Union[ForwardRef('Image.Image'), str] question: typing.Optional[str] = None word_boxes: typing.Tuple[str, typing.List[float]] = None **kwargs ) → A dict or a list of dict
Parameters
image (str or PIL.Image) — The pipeline handles three types of images:
A string containing a http link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. If given a single image, it can be broadcasted to multiple questions.
question (str) — A question to ask of the document.
word_boxes (List[str, Tuple[float, float, float, float]], optional) — A list of words and bounding boxes (normalized 0->1000). If you provide this optional input, then the pipeline will use these words and boxes instead of running OCR on the image to derive them for models that need them (e.g. LayoutLM). This allows you to reuse OCR’d results across many invocations of the pipeline without having to re-run it each time.
top_k (int, optional, defaults to 1) — The number of answers to return (will be chosen by order of likelihood). Note that we return less than top_k answers if there are not enough options available within the context.
doc_stride (int, optional, defaults to 128) — If the words in the document are too long to fit with the question for the model, it will be split in several chunks with some overlap. This argument controls the size of that overlap.
max_answer_len (int, optional, defaults to 15) — The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
max_seq_len (int, optional, defaults to 384) — The maximum length of the total sentence (context + question) in tokens of each chunk passed to the model. The context will be split in several chunks (using doc_stride as overlap) if needed.
max_question_len (int, optional, defaults to 64) — The maximum length of the question after tokenization. It will be truncated if needed.
handle_impossible_answer (bool, optional, defaults to False) — Whether or not we accept impossible as an answer.
lang (str, optional) — Language to use while running OCR. Defaults to english.
tesseract_config (str, optional) — Additional flags to pass to tesseract while running OCR.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.
Returns
A dict or a list of dict
Each result comes as a dictionary with the following keys:
score (float) — The probability associated to the answer.
start (int) — The start word index of the answer (in the OCR’d version of the input or provided word_boxes).
end (int) — The end word index of the answer (in the OCR’d version of the input or provided word_boxes).
answer (str) — The answer to the question.
words (list[int]) — The index of each word/box pair that is in the answer.
Answer the question(s) given as inputs by using the document(s). A document is defined as an image and an optional list of (word, box) tuples which represent the text in the document. If the word_boxes are not provided, it will use the Tesseract OCR engine (if available) to extract the words and boxes automatically for LayoutLM-like models which require them as input. For Donut, no OCR is run.
You can invoke the pipeline several ways:
pipeline(image=image, question=question)
pipeline(image=image, question=question, word_boxes=word_boxes)
pipeline([{"image": image, "question": question}])
pipeline([{"image": image, "question": question, "word_boxes": word_boxes}])
FeatureExtractionPipeline
class transformers.FeatureExtractionPipeline
< source >
( model: typing.Union[ForwardRef('PreTrainedModel'), ForwardRef('TFPreTrainedModel')] tokenizer: typing.Optional[transformers.tokenization_utils.PreTrainedTokenizer] = None feature_extractor: typing.Optional[ForwardRef('SequenceFeatureExtractor')] = None image_processor: typing.Optional[transformers.image_processing_utils.BaseImageProcessor] = None modelcard: typing.Optional[transformers.modelcard.ModelCard] = None framework: typing.Optional[str] = None task: str = '' args_parser: ArgumentHandler = None device: typing.Union[int, ForwardRef('torch.device')] = None torch_dtype: typing.Union[str, ForwardRef('torch.dtype'), NoneType] = None binary_output: bool = False **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
return_tensors (bool, optional) — If True, returns a tensor according to the specified framework, otherwise returns a list.
task (str, defaults to "") — A task-identifier for the pipeline.
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id.
tokenize_kwargs (dict, optional) — Additional dictionary of keyword arguments passed along to the tokenizer.
Feature extraction pipeline using no model head. This pipeline extracts the hidden states from the base transformer, which can be used as features in downstream tasks.
Example:
>>> from transformers import pipeline
>>> extractor = pipeline(model="bert-base-uncased", task="feature-extraction")
>>> result = extractor("This is a simple test.", return_tensors=True)
>>> result.shape
torch.Size([1, 8, 768])
Learn more about the basics of using a pipeline in the pipeline tutorial
This feature extraction pipeline can currently be loaded from pipeline() using the task identifier: "feature-extraction".
All models may be used for this pipeline. See a list of all models, including community-contributed models on huggingface.co/models.
__call__
< source >
( *args **kwargs ) → A nested list of float
Parameters
args (str or List[str]) — One or several texts (or one list of texts) to get the features of.
Returns
A nested list of float
The features computed by the model.
Extract the features of the input(s).
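Building on the example above, one common (though not the only) way to turn the per-token hidden states into a single vector per input is mean pooling, sketched here:
from transformers import pipeline

extractor = pipeline(model="bert-base-uncased", task="feature-extraction")

# return_tensors=True returns a tensor of shape (batch_size, sequence_length, hidden_size).
hidden_states = extractor("This is a simple test.", return_tensors=True)

# Mean-pool over the token dimension to obtain one embedding per input text.
sentence_embedding = hidden_states.mean(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])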
ImageToTextPipeline
class transformers.ImageToTextPipeline
< source >
( *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating if the output the pipeline should happen in a binary format (i.e., pickle) or as raw text.
Image To Text pipeline using a AutoModelForVision2Seq. This pipeline predicts a caption for a given image.
Example:
>>> from transformers import pipeline
>>> captioner = pipeline(model="ydshieh/vit-gpt2-coco-en")
>>> captioner("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'generated_text': 'two birds are standing next to each other '}]
Learn more about the basics of using a pipeline in the pipeline tutorial
This image to text pipeline can currently be loaded from pipeline() using the following task identifier: "image-to-text".
See the list of available models on huggingface.co/models.
__call__
< source >
( images: typing.Union[str, typing.List[str], ForwardRef('Image.Image'), typing.List[ForwardRef('Image.Image')]] **kwargs ) → A list or a list of list of dict
Parameters
images (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:
A string containing a HTTP(s) link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images.
max_new_tokens (int, optional) — The maximum number of tokens to generate. By default, it will use the default value of generate.
generate_kwargs (Dict, optional) — Additional keyword arguments to pass along to generate, allowing full control of the generation function.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.
Returns
A list or a list of list of dict
Each result comes as a dictionary with the following key:
generated_text (str) — The generated text.
Generate a text caption for the image(s) passed as inputs.
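As a sketch of controlling generation length and passing extra arguments through generate_kwargs (the values below are arbitrary):
from transformers import pipeline

captioner = pipeline(model="ydshieh/vit-gpt2-coco-en")

# Cap the caption length and forward additional arguments to generate.
captioner(
    "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
    max_new_tokens=20,
    generate_kwargs={"do_sample": False},
)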
VisualQuestionAnsweringPipeline
class transformers.VisualQuestionAnsweringPipeline
< source >
( *args **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating if the output the pipeline should happen in a binary format (i.e., pickle) or as raw text.
Visual Question Answering pipeline using a AutoModelForVisualQuestionAnswering. This pipeline is currently only available in PyTorch.
Example:
>>> from transformers import pipeline
>>> oracle = pipeline(model="dandelin/vilt-b32-finetuned-vqa")
>>> image_url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/lena.png"
>>> oracle(question="What is she wearing ?", image=image_url)
[{'score': 0.948, 'answer': 'hat'}, {'score': 0.009, 'answer': 'fedora'}, {'score': 0.003, 'answer': 'clothes'}, {'score': 0.003, 'answer': 'sun hat'}, {'score': 0.002, 'answer': 'nothing'}]
>>> oracle(question="What is she wearing ?", image=image_url, top_k=1)
[{'score': 0.948, 'answer': 'hat'}]
>>> oracle(question="Is this a person ?", image=image_url, top_k=1)
[{'score': 0.993, 'answer': 'yes'}]
>>> oracle(question="Is this a man ?", image=image_url, top_k=1)
[{'score': 0.996, 'answer': 'no'}]
Learn more about the basics of using a pipeline in the pipeline tutorial
This visual question answering pipeline can currently be loaded from pipeline() using the following task identifiers: "visual-question-answering", "vqa".
The models that this pipeline can use are models that have been fine-tuned on a visual question answering task. See the up-to-date list of available models on huggingface.co/models.
__call__
< source >
( image: typing.Union[ForwardRef('Image.Image'), str] question: str = None **kwargs ) → A dictionary or a list of dictionaries containing the result. The dictionaries contain the following keys
Parameters
image (str, List[str], PIL.Image or List[PIL.Image]) — The pipeline handles three types of images:
A string containing a http link pointing to an image
A string containing a local path to an image
An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. If given a single image, it can be broadcasted to multiple questions.
question (str, List[str]) — The question(s) asked. If given a single question, it can be broadcasted to multiple images.
top_k (int, optional, defaults to 5) — The number of top labels that will be returned by the pipeline. If the provided number is higher than the number of labels available in the model configuration, it will default to the number of labels.
timeout (float, optional, defaults to None) — The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and the call may block forever.
Returns
A dictionary or a list of dictionaries containing the result. The dictionaries contain the following keys
label (str) — The label identified by the model.
score (float) — The score attributed by the model for that label.
Answers open-ended questions about images. The pipeline accepts several types of inputs which are detailed below:
pipeline(image=image, question=question)
pipeline({"image": image, "question": question})
pipeline([{"image": image, "question": question}])
pipeline([{"image": image, "question": question}, {"image": image, "question": question}])
Parent class: Pipeline
class transformers.Pipeline
< source >
( model: typing.Union[ForwardRef('PreTrainedModel'), ForwardRef('TFPreTrainedModel')] tokenizer: typing.Optional[transformers.tokenization_utils.PreTrainedTokenizer] = None feature_extractor: typing.Optional[ForwardRef('SequenceFeatureExtractor')] = None image_processor: typing.Optional[transformers.image_processing_utils.BaseImageProcessor] = None modelcard: typing.Optional[transformers.modelcard.ModelCard] = None framework: typing.Optional[str] = None task: str = '' args_parser: ArgumentHandler = None device: typing.Union[int, ForwardRef('torch.device')] = None torch_dtype: typing.Union[str, ForwardRef('torch.dtype'), NoneType] = None binary_output: bool = False **kwargs )
Parameters
model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.
modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.
framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed.
If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the model, or to PyTorch if no model is provided.
task (str, defaults to "") — A task-identifier for the pipeline.
num_workers (int, optional, defaults to 8) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the number of workers to be used.
batch_size (int, optional, defaults to 1) — When the pipeline will use DataLoader (when passing a dataset, on GPU for a Pytorch model), the size of the batch to use, for inference this is not always beneficial, please read Batching with pipelines .
args_parser (ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.
device (int, optional, defaults to -1) — Device ordinal for CPU/GPU supports. Setting this to -1 will leverage CPU, a positive will run the model on the associated CUDA device id. You can pass native torch.device or a str too.
binary_output (bool, optional, defaults to False) — Flag indicating if the output the pipeline should happen in a binary format (i.e., pickle) or as raw text.
The Pipeline class is the class from which all pipelines inherit. Refer to this class for methods shared across different pipelines.
Base class implementing pipelined operations. Pipeline workflow is defined as a sequence of the following operations:
Input -> Tokenization -> Model Inference -> Post-Processing (task dependent) -> Output
Pipeline supports running on CPU or GPU through the device argument (see below).
Some pipelines, such as FeatureExtractionPipeline ('feature-extraction'), output large tensor objects as nested lists. To avoid dumping such large structures as textual data, we provide the binary_output constructor argument. If set to True, the output will be stored in pickle format.
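A minimal sketch of how the Input -> Tokenization -> Model Inference -> Post-Processing workflow maps onto a custom subclass, following the pattern from the guide on adding new pipelines (the maybe_arg parameter is a placeholder):
from transformers import Pipeline

class MyPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        # Split user kwargs into preprocess, forward and postprocess parameters.
        preprocess_kwargs = {}
        if "maybe_arg" in kwargs:
            preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"]
        return preprocess_kwargs, {}, {}

    def preprocess(self, inputs, maybe_arg=2):
        # Turn the raw input into model-ready tensors.
        return self.tokenizer(inputs, return_tensors=self.framework)

    def _forward(self, model_inputs):
        # Run the model; device placement is handled by the base class.
        return self.model(**model_inputs)

    def postprocess(self, model_outputs):
        # Reformat the raw model outputs into something user friendly.
        return model_outputs["logits"].softmax(-1).tolist()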
check_model_type
< source >
( supported_models: typing.Union[typing.List[str], dict] )
Parameters
supported_models (List[str] or dict) — The list of models supported by the pipeline, or a dictionary with model class values.
Check if the model class is supported by the pipeline.
device_placement
< source >
( )
Context Manager allowing tensor allocation on the user-specified device in a framework-agnostic way.
Examples:
pipe = pipeline(..., device=0)
with pipe.device_placement():
    output = pipe(...)
ensure_tensor_on_device
< source >
( **inputs ) → Dict[str, torch.Tensor]
Parameters
inputs (keyword arguments that should be torch.Tensor, the rest is ignored) — The tensors to place on self.device. Recursive on lists only.
Returns
Dict[str, torch.Tensor]
The same as inputs but on the proper device.
Ensure PyTorch tensors are on the specified device.
postprocess
< source >
( model_outputs: ModelOutput **postprocess_parameters: typing.Dict )
Postprocess will receive the raw outputs of the _forward method, generally tensors, and reformat them into something more friendly. Generally it will output a list or a dict of results (containing just strings and numbers).
predict
< source >
( X )
Scikit / Keras interface to transformers’ pipelines. This method will forward to __call__().
preprocess
< source >
( input_: typing.Any **preprocess_parameters: typing.Dict )
Preprocess will take the input_ of a specific pipeline and return a dictionary of everything necessary for _forward to run properly. It should contain at least one tensor, but might have arbitrary other items.
save_pretrained
< source >
( save_directory: str safe_serialization: bool = False )
Parameters
save_directory (str) — A path to the directory where to save the pipeline. It will be created if it doesn’t exist.
safe_serialization (bool) — Whether to save the model using safetensors or the traditional way for PyTorch or TensorFlow.
Save the pipeline’s model and tokenizer.
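For example (the directory name below is an arbitrary local path):
from transformers import pipeline

pipe = pipeline("sentiment-analysis")

# Saves the model, tokenizer and other pipeline components to the given directory.
pipe.save_pretrained("./my_sentiment_pipeline", safe_serialization=True)

# The saved directory can later be passed back to pipeline() as the model.
reloaded = pipeline("sentiment-analysis", model="./my_sentiment_pipeline")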
transform
< source >
( X )
Scikit / Keras interface to transformers’ pipelines. This method will forward to __call__().
https://huggingface.co/docs/transformers/internal/time_series_utils | Time Series Utilities
This page lists all the utility functions and classes that can be used for Time Series based models.
Most of those are only useful if you are studying the code of the time series models or you wish to add to the collection of distributional output classes.
Distributional Output
class transformers.time_series_utils.NormalOutput
< source >
( dim: int = 1 )
Normal distribution output class.
class transformers.time_series_utils.StudentTOutput
< source >
( dim: int = 1 )
Student-T distribution output class.
class transformers.time_series_utils.NegativeBinomialOutput
< source >
( dim: int = 1 )
Negative Binomial distribution output class. |
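A rough sketch of how these classes are typically consumed by the time series models; the get_parameter_projection and distribution calls follow the shared DistributionOutput interface, but treat the exact signatures as assumptions and check the source when extending it:
import torch
from transformers.time_series_utils import StudentTOutput

# Assumed usage pattern, mirroring how the time series models use distributional outputs.
output = StudentTOutput(dim=1)

hidden_size = 32
projection = output.get_parameter_projection(hidden_size)  # nn.Module mapping hidden states to distribution args

hidden_states = torch.randn(8, hidden_size)   # (batch, hidden_size)
distr_args = projection(hidden_states)        # tuple of distribution parameters
distribution = output.distribution(distr_args)

sample = distribution.sample()                # draws from the predicted Student-T distribution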
https://huggingface.co/docs/transformers/model_doc/bark | Bark
Overview
Bark is a transformer-based text-to-speech model proposed by Suno AI in suno-ai/bark.
Bark is made of 4 main models:
BarkSemanticModel (also referred to as the ‘text’ model): a causal auto-regressive transformer model that takes as input tokenized text, and predicts semantic text tokens that capture the meaning of the text.
BarkCoarseModel (also referred to as the ‘coarse acoustics’ model): a causal autoregressive transformer, that takes as input the results of the BarkSemanticModel model. It aims at predicting the first two audio codebooks necessary for EnCodec.
BarkFineModel (the ‘fine acoustics’ model), this time a non-causal autoencoder transformer, which iteratively predicts the last codebooks based on the sum of the previous codebooks embeddings.
Having predicted all the codebook channels, Bark uses EncodecModel to decode them into the output audio array.
It should be noted that each of the first three modules can support conditional speaker embeddings to condition the output sound according to a specific predefined voice.
Optimizing Bark
Bark can be optimized with just a few extra lines of code, which significantly reduces its memory footprint and accelerates inference.
Using half-precision
You can speed up inference and reduce memory footprint by 50% simply by loading the model in half-precision.
from transformers import BarkModel
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
model = BarkModel.from_pretrained("suno/bark-small", torch_dtype=torch.float16).to(device)
Using 🤗 Better Transformer
Better Transformer is an 🤗 Optimum feature that performs kernel fusion under the hood. You can gain 20% to 30% in speed with zero performance degradation. It only requires one line of code to export the model to 🤗 Better Transformer:
model = model.to_bettertransformer()
Note that 🤗 Optimum must be installed before using this feature. Here’s how to install it.
Using CPU offload
As mentioned above, Bark is made up of 4 sub-models, which are called up sequentially during audio generation. In other words, while one sub-model is in use, the other sub-models are idle.
If you’re using a CUDA device, a simple solution to benefit from an 80% reduction in memory footprint is to offload the sub-models from GPU to CPU when they’re idle. This operation is called CPU offloading. You can use it with one line of code.
model.enable_cpu_offload()
Note that 🤗 Accelerate must be installed before using this feature. Here’s how to install it.
Combining optimization techniques
You can combine optimization techniques, and use CPU offload, half-precision and 🤗 Better Transformer all at once.
from transformers import BarkModel
from optimum.bettertransformer import BetterTransformer  # BetterTransformer comes from the 🤗 Optimum package
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
model = BarkModel.from_pretrained("suno/bark-small", torch_dtype=torch.float16).to(device)
model = BetterTransformer.transform(model, keep_original_model=False)
model.enable_cpu_offload()
Find out more on inference optimization techniques here.
Tips
Suno offers a library of voice presets in a number of languages here. These presets are also uploaded in the hub here or here.
>>> from transformers import AutoProcessor, BarkModel
>>> processor = AutoProcessor.from_pretrained("suno/bark")
>>> model = BarkModel.from_pretrained("suno/bark")
>>> voice_preset = "v2/en_speaker_6"
>>> inputs = processor("Hello, my dog is cute", voice_preset=voice_preset)
>>> audio_array = model.generate(**inputs)
>>> audio_array = audio_array.cpu().numpy().squeeze()
Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects.
>>> # Multilingual speech - simplified Chinese
>>> inputs = processor("惊人的!我会说中文")
>>> # Multilingual speech - French, with a voice preset
>>> inputs = processor("Incroyable! Je peux générer du son.", voice_preset="fr_speaker_5")
>>> # Bark can also generate music; surrounding the lyrics with music note symbols helps
>>> inputs = processor("♪ Hello, my dog is cute ♪")
>>> audio_array = model.generate(**inputs)
>>> audio_array = audio_array.cpu().numpy().squeeze()
The model can also produce nonverbal communications like laughing, sighing and crying.
>>> # Nonverbal cues can be added directly to the input text
>>> inputs = processor("Hello uh ... [clears throat], my dog is cute [laughter]")
>>> audio_array = model.generate(**inputs)
>>> audio_array = audio_array.cpu().numpy().squeeze()
To save the audio, take the sample rate from the model’s generation config and use a scipy utility:
>>> from scipy.io.wavfile import write as write_wav
>>> # save audio to disk, but first take the sample rate from the model config
>>> sample_rate = model.generation_config.sample_rate
>>> write_wav("bark_generation.wav", sample_rate, audio_array)
This model was contributed by Yoach Lacombe (ylacombe) and Sanchit Gandhi (sanchit-gandhi). The original code can be found here.
BarkConfig
class transformers.BarkConfig
< source >
( semantic_config: typing.Dict = None coarse_acoustics_config: typing.Dict = None fine_acoustics_config: typing.Dict = None codec_config: typing.Dict = None initializer_range = 0.02 **kwargs )
Parameters
semantic_config (BarkSemanticConfig, optional) — Configuration of the underlying semantic sub-model.
coarse_acoustics_config (BarkCoarseConfig, optional) — Configuration of the underlying coarse acoustics sub-model.
fine_acoustics_config (BarkFineConfig, optional) — Configuration of the underlying fine acoustics sub-model.
codec_config (AutoConfig, optional) — Configuration of the underlying codec sub-model.
This is the configuration class to store the configuration of a BarkModel. It is used to instantiate a Bark model according to the specified sub-models configurations, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the Bark suno/bark architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
from_sub_model_configs
< source >
( semantic_config: BarkSemanticConfig coarse_acoustics_config: BarkCoarseConfig fine_acoustics_config: BarkFineConfig codec_config: PretrainedConfig **kwargs ) → BarkConfig
An instance of a configuration object
Instantiate a BarkConfig (or a derived class) from bark sub-models configuration.
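For example, a sketch building a configuration from default sub-configurations (EncodecConfig is used here for the codec sub-model, matching the suno/bark checkpoints):
from transformers import (
    BarkConfig,
    BarkSemanticConfig,
    BarkCoarseConfig,
    BarkFineConfig,
    EncodecConfig,
)

# Default sub-configurations; in practice you would tweak their parameters.
semantic_config = BarkSemanticConfig()
coarse_acoustics_config = BarkCoarseConfig()
fine_acoustics_config = BarkFineConfig()
codec_config = EncodecConfig()

configuration = BarkConfig.from_sub_model_configs(
    semantic_config, coarse_acoustics_config, fine_acoustics_config, codec_config
)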
BarkProcessor
class transformers.BarkProcessor
< source >
( tokenizer speaker_embeddings = None )
Parameters
tokenizer (PreTrainedTokenizer) — An instance of PreTrainedTokenizer.
speaker_embeddings (Dict[Dict[str]], optional, defaults to None) — Optional nested speaker embeddings dictionary. The first level contains voice preset names (e.g. "en_speaker_4"). The second level contains "semantic_prompt", "coarse_prompt" and "fine_prompt" embeddings. The values correspond to the path of the corresponding np.ndarray. See here for a list of voice_preset_names.
Constructs a Bark processor which wraps a text tokenizer and optional Bark voice presets into a single processor.
__call__
< source >
( text = None voice_preset = None return_tensors = 'pt' max_length = 256 add_special_tokens = False return_attention_mask = True return_token_type_ids = False **kwargs ) → Tuple(BatchEncoding, BatchFeature)
Parameters
text (str, List[str], List[List[str]]) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
voice_preset (str, Dict[np.ndarray]) — The voice preset, i.e. the speaker embeddings. It can either be a valid voice_preset name, e.g. "en_speaker_1", or directly a dictionary of np.ndarray embeddings for each submodel of Bark. Or it can be a valid file name of a local .npz single voice preset.
return_tensors (str or TensorType, optional) — If set, will return tensors of a particular framework. Acceptable values are:
'pt': Return PyTorch torch.Tensor objects.
'np': Return NumPy np.ndarray objects.
Returns
Tuple(BatchEncoding, BatchFeature)
A tuple composed of a BatchEncoding, i.e. the output of the tokenizer, and a BatchFeature, i.e. the voice preset with the right tensor type.
Main method to prepare one or several sequence(s) for the model. This method forwards the text and kwargs arguments to the AutoTokenizer’s __call__() to encode the text. The method also proposes a voice preset which is a dictionary of arrays that conditions Bark’s output. kwargs arguments are forwarded to the tokenizer and to the cached_file method if voice_preset is a valid filename.
from_pretrained
< source >
( pretrained_processor_name_or_path speaker_embeddings_dict_path = 'speaker_embeddings_path.json' **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — This can be either:
a string, the model id of a pretrained BarkProcessor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
a path to a directory containing a processor saved using the save_pretrained() method, e.g., ./my_model_directory/.
speaker_embeddings_dict_path (str, optional, defaults to "speaker_embeddings_path.json") — The name of the .json file containing the speaker_embeddings dictionary located in pretrained_model_name_or_path. If None, no speaker_embeddings is loaded.
**kwargs — Additional keyword arguments passed along to ~tokenization_utils_base.PreTrainedTokenizer.from_pretrained.
Instantiate a Bark processor associated with a pretrained model.
save_pretrained
< source >
( save_directory speaker_embeddings_dict_path = 'speaker_embeddings_path.json' speaker_embeddings_directory = 'speaker_embeddings' push_to_hub: bool = False **kwargs )
Parameters
save_directory (str or os.PathLike) — Directory where the tokenizer files and the speaker embeddings will be saved (directory will be created if it does not exist).
speaker_embeddings_dict_path (str, optional, defaults to "speaker_embeddings_path.json") — The name of the .json file that will contain the speaker_embeddings nested path dictionary, if it exists, and that will be located in pretrained_model_name_or_path/speaker_embeddings_directory.
speaker_embeddings_directory (str, optional, defaults to "speaker_embeddings/") — The name of the folder in which the speaker_embeddings arrays will be saved.
push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).
kwargs — Additional keyword arguments passed along to the push_to_hub() method.
Saves the attributes of this processor (tokenizer…) in the specified directory so that it can be reloaded using the from_pretrained() method.
BarkModel
class transformers.BarkModel
< source >
( config )
Parameters
config (BarkConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The full Bark model, a text-to-speech model composed of 4 sub-models:
BarkSemanticModel (also referred to as the ‘text’ model): a causal auto-regressive transformer model that takes as input tokenized text, and predicts semantic text tokens that capture the meaning of the text.
BarkCoarseModel (also referred to as the ‘coarse acoustics’ model): also a causal autoregressive transformer, which takes as input the results of the BarkSemanticModel. It aims at predicting the first two audio codebooks necessary for EnCodec.
BarkFineModel (the ‘fine acoustics’ model), this time a non-causal autoencoder transformer, which iteratively predicts the last codebooks based on the sum of the previous codebooks embeddings.
Having predicted all the codebook channels, Bark uses EncodecModel to decode them into the output audio array.
It should be noted that each of the first three modules can support conditional speaker embeddings to condition the output sound according to a specific predefined voice.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
generate
< source >
( input_ids: typing.Optional[torch.Tensor] = None history_prompt: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None **kwargs ) → torch.LongTensor
Parameters
input_ids (Optional[torch.Tensor] of shape (batch_size, seq_len), optional) — Input ids. Will be truncated up to 256 tokens. Note that the output audios will be as long as the longest generation among the batch.
history_prompt (Optional[Dict[str,torch.Tensor]], optional) — Optional Bark speaker prompt. Note that for now, this model takes only one speaker prompt per batch.
kwargs (optional) — Remaining dictionary of keyword arguments. Keyword arguments are of two types:
Without a prefix, they will be entered as **kwargs for the generate method of each sub-model.
With a semantic_, coarse_, fine_ prefix, they will be input for the generate method of the semantic, coarse and fine respectively. It has the priority over the keywords without a prefix.
This means you can, for example, specify a generation strategy for all sub-models except one.
Output generated audio.
Generates audio from an input prompt and an additional optional Bark speaker prompt.
Example:
>>> from transformers import AutoProcessor, BarkModel
>>> processor = AutoProcessor.from_pretrained("suno/bark-small")
>>> model = BarkModel.from_pretrained("suno/bark-small")
>>> # To add a voice preset, pass voice_preset to the processor call
>>> voice_preset = "v2/en_speaker_6"
>>> inputs = processor("Hello, my dog is cute, I need him in my life", voice_preset=voice_preset)
>>> audio_array = model.generate(**inputs, semantic_max_new_tokens=100)
>>> audio_array = audio_array.cpu().numpy().squeeze()
enable_cpu_offload
< source >
( gpu_id: typing.Optional[int] = 0 )
Parameters
gpu_id (int, optional, defaults to 0) — GPU id on which the sub-models will be loaded and offloaded.
Offloads all sub-models to CPU using accelerate, reducing memory usage with a low impact on performance. This method moves one whole sub-model at a time to the GPU when it is used, and the sub-model remains on the GPU until the next sub-model runs.
BarkSemanticModel
class transformers.BarkSemanticModel
< source >
( config )
Parameters
config (BarkSemanticConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bark semantic (or text) model. It shares the same architecture as the coarse model. It is a GPT-2 like autoregressive model with a language modeling head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.LongTensor] = None input_embeds: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs?
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all input_ids of shape (batch_size, sequence_length).
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
input_embeds (torch.FloatTensor of shape (batch_size, input_sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. Here, due to Bark particularities, if past_key_values is used, input_embeds will be ignored and you have to use input_ids. If past_key_values is not used and use_cache is set to True, input_embeds is used in priority instead of input_ids.
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
The BarkCausalModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
BarkCoarseModel
class transformers.BarkCoarseModel
< source >
( config )
Parameters
config (BarkCoarseConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bark coarse acoustics model. It shares the same architecture as the semantic (or text) model. It is a GPT-2 like autoregressive model with a language modeling head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.LongTensor] = None input_embeds: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs?
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all input_ids of shape (batch_size, sequence_length).
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
input_embeds (torch.FloatTensor of shape (batch_size, input_sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. Here, due to Bark particularities, if past_key_values is used, input_embeds will be ignored and you have to use input_ids. If past_key_values is not used and use_cache is set to True, input_embeds is used in priority instead of input_ids.
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
The BarkCausalModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
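In practice the coarse, fine and semantic sub-models are rarely called directly; they are orchestrated by BarkModel.generate() together with a BarkProcessor. A short end-to-end sketch (the checkpoint name and voice preset below are just example choices):
>>> from transformers import AutoProcessor, BarkModel
>>> # load a processor and a full Bark model (example checkpoint)
>>> processor = AutoProcessor.from_pretrained("suno/bark-small")
>>> model = BarkModel.from_pretrained("suno/bark-small")
>>> # prepare text inputs with an example voice preset
>>> inputs = processor("Hello, my dog is cute", voice_preset="v2/en_speaker_6")
>>> # generate the audio waveform
>>> audio_array = model.generate(**inputs)
>>> audio_array = audio_array.cpu().numpy().squeeze()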
BarkFineModel
class transformers.BarkFineModel
< source >
( config )
Parameters
config (BarkFineConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bark fine acoustics model. It is a non-causal GPT-like model with config.n_codes_total embedding layers and language modeling heads, one for each codebook. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( codebook_idx: int input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.LongTensor] = None input_embeds: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )
Parameters
codebook_idx (int) — Index of the codebook that will be predicted.
input_ids (torch.LongTensor of shape (batch_size, sequence_length, number_of_codebooks)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Initially, indices of the first two codebooks are obtained from the coarse sub-model. The rest is predicted recursively by attending to the previously predicted channels. The model predicts on windows of length 1024.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — NOT IMPLEMENTED YET.
input_embeds (torch.FloatTensor of shape (batch_size, input_sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
The BarkFineModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
BarkCausalModel
class transformers.BarkCausalModel
< source >
( config )
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.LongTensor] = None input_embeds: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input IDs?
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all input_ids of shape (batch_size, sequence_length).
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
input_embeds (torch.FloatTensor of shape (batch_size, input_sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. Here, due to Bark particularities, if past_key_values is used, input_embeds will be ignored and you have to use input_ids. If past_key_values is not used and use_cache is set to True, input_embeds takes precedence over input_ids.
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
The BarkCausalModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
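As a minimal sketch of the caching mechanism described above, a small randomly initialized semantic sub-model can be run step by step with past_key_values (the sizes below are illustrative choices, not a pretrained checkpoint):
>>> import torch
>>> from transformers import BarkSemanticConfig, BarkSemanticModel
>>> # a small, randomly initialized semantic sub-model (illustrative sizes)
>>> config = BarkSemanticConfig(num_layers=2, num_heads=2, hidden_size=64)
>>> model = BarkSemanticModel(config)
>>> # a first forward pass returns the key/value cache
>>> input_ids = torch.randint(0, config.input_vocab_size, (1, 16))
>>> outputs = model(input_ids=input_ids, use_cache=True)
>>> # on the next step, only the newest token needs to be fed together with the cache
>>> next_outputs = model(
...     input_ids=input_ids[:, -1:], past_key_values=outputs.past_key_values, use_cache=True
... )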
BarkCoarseConfig
class transformers.BarkCoarseConfig
< source >
( block_size = 1024 input_vocab_size = 10048 output_vocab_size = 10048 num_layers = 12 num_heads = 12 hidden_size = 768 dropout = 0.0 bias = True initializer_range = 0.02 use_cache = True **kwargs )
Parameters
block_size (int, optional, defaults to 1024) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
input_vocab_size (int, optional, defaults to 10_048) — Vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling BarkCoarseModel. Defaults to 10_048 but should be chosen carefully with regard to the chosen sub-model.
output_vocab_size (int, optional, defaults to 10_048) — Output vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented by the output_ids produced when doing a forward pass of BarkCoarseModel. Defaults to 10_048 but should be chosen carefully with regard to the chosen sub-model.
num_layers (int, optional, defaults to 12) — Number of hidden layers in the given sub-model.
num_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer architecture.
hidden_size (int, optional, defaults to 768) — Dimensionality of the hidden representations in the architecture.
dropout (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
bias (bool, optional, defaults to True) — Whether or not to use bias in the linear layers and layer norm layers.
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a BarkCoarseModel. It is used to instantiate the model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Bark suno/bark architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import BarkCoarseConfig, BarkCoarseModel
>>> # Initializing a Bark coarse acoustics sub-module style configuration
>>> configuration = BarkCoarseConfig()
>>> # Initializing a model (with random weights) from the suno/bark style configuration
>>> model = BarkCoarseModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
BarkFineConfig
class transformers.BarkFineConfig
< source >
( tie_word_embeddings = True n_codes_total = 8 n_codes_given = 1 **kwargs )
Parameters
block_size (int, optional, defaults to 1024) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
input_vocab_size (int, optional, defaults to 10_048) — Vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling BarkFineModel. Defaults to 10_048 but should be chosen carefully with regard to the chosen sub-model.
output_vocab_size (int, optional, defaults to 10_048) — Output vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented by the output_ids produced when doing a forward pass of BarkFineModel. Defaults to 10_048 but should be chosen carefully with regard to the chosen sub-model.
num_layers (int, optional, defaults to 12) — Number of hidden layers in the given sub-model.
num_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer architecture.
hidden_size (int, optional, defaults to 768) — Dimensionality of the hidden representations in the architecture.
dropout (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
bias (bool, optional, defaults to True) — Whether or not to use bias in the linear layers and layer norm layers.
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models).
n_codes_total (int, optional, defaults to 8) — The total number of audio codebooks predicted. Used in the fine acoustics sub-model.
n_codes_given (int, optional, defaults to 1) — The number of audio codebooks predicted in the coarse acoustics sub-model. Used in the acoustics sub-models.
This is the configuration class to store the configuration of a BarkFineModel. It is used to instantiate the model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Bark suno/bark architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import BarkFineConfig, BarkFineModel
>>> # Initializing a Bark fine acoustics sub-module style configuration
>>> configuration = BarkFineConfig()
>>> # Initializing a model (with random weights) from the suno/bark style configuration
>>> model = BarkFineModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
BarkSemanticConfig
class transformers.BarkSemanticConfig
< source >
( block_size = 1024 input_vocab_size = 10048 output_vocab_size = 10048 num_layers = 12 num_heads = 12 hidden_size = 768 dropout = 0.0 bias = True initializer_range = 0.02 use_cache = True **kwargs )
Parameters
block_size (int, optional, defaults to 1024) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
input_vocab_size (int, optional, defaults to 10_048) — Vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling BarkSemanticModel. Defaults to 10_048 but should be chosen carefully with regard to the chosen sub-model.
output_vocab_size (int, optional, defaults to 10_048) — Output vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented by the output_ids produced when doing a forward pass of BarkSemanticModel. Defaults to 10_048 but should be chosen carefully with regard to the chosen sub-model.
num_layers (int, optional, defaults to 12) — Number of hidden layers in the given sub-model.
num_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer architecture.
hidden_size (int, optional, defaults to 768) — Dimensionality of the hidden representations in the architecture.
dropout (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
bias (bool, optional, defaults to True) — Whether or not to use bias in the linear layers and layer norm layers.
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a BarkSemanticModel. It is used to instantiate the model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Bark suno/bark architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import BarkSemanticConfig, BarkSemanticModel
>>> # Initializing a Bark semantic sub-module style configuration
>>> configuration = BarkSemanticConfig()
>>> # Initializing a model (with random weights) from the suno/bark style configuration
>>> model = BarkSemanticModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
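The three sub-model configurations are typically combined, together with an EnCodec codec configuration, into a single BarkConfig. A minimal sketch using default sub-configurations (shown for illustration only):
>>> from transformers import (
...     BarkCoarseConfig,
...     BarkConfig,
...     BarkFineConfig,
...     BarkSemanticConfig,
...     EncodecConfig,
... )
>>> # default sub-model configurations (illustration only)
>>> semantic_config = BarkSemanticConfig()
>>> coarse_config = BarkCoarseConfig()
>>> fine_config = BarkFineConfig()
>>> codec_config = EncodecConfig()
>>> # combining them into a full Bark configuration
>>> bark_config = BarkConfig.from_sub_model_configs(
...     semantic_config, coarse_config, fine_config, codec_config
... )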
Autoformer
Overview
The Autoformer model was proposed in Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long.
This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.
The abstract from the paper is the following:
Extending the forecasting time is a critical demand for real applications, such as extreme weather early warning and long-term energy consumption planning. This paper studies the long-term forecasting problem of time series. Prior Transformer-based models adopt various self-attention mechanisms to discover the long-range dependencies. However, intricate temporal patterns of the long-term future prohibit the model from finding reliable dependencies. Also, Transformers have to adopt the sparse versions of point-wise self-attentions for long series efficiency, resulting in the information utilization bottleneck. Going beyond Transformers, we design Autoformer as a novel decomposition architecture with an Auto-Correlation mechanism. We break with the pre-processing convention of series decomposition and renovate it as a basic inner block of deep models. This design empowers Autoformer with progressive decomposition capacities for complex time series. Further, inspired by the stochastic process theory, we design the Auto-Correlation mechanism based on the series periodicity, which conducts the dependencies discovery and representation aggregation at the sub-series level. Auto-Correlation outperforms self-attention in both efficiency and accuracy. In long-term forecasting, Autoformer yields state-of-the-art accuracy, with a 38% relative improvement on six benchmarks, covering five practical applications: energy, traffic, economics, weather and disease.
This model was contributed by elisim and kashif. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Check out the Autoformer blog post on the Hugging Face blog: Yes, Transformers are Effective for Time Series Forecasting (+ Autoformer)
AutoformerConfig
class transformers.AutoformerConfig
< source >
( prediction_length: typing.Optional[int] = None context_length: typing.Optional[int] = None distribution_output: str = 'student_t' loss: str = 'nll' input_size: int = 1 lags_sequence: typing.List[int] = [1, 2, 3, 4, 5, 6, 7] scaling: bool = True num_time_features: int = 0 num_dynamic_real_features: int = 0 num_static_categorical_features: int = 0 num_static_real_features: int = 0 cardinality: typing.Optional[typing.List[int]] = None embedding_dimension: typing.Optional[typing.List[int]] = None d_model: int = 64 encoder_attention_heads: int = 2 decoder_attention_heads: int = 2 encoder_layers: int = 2 decoder_layers: int = 2 encoder_ffn_dim: int = 32 decoder_ffn_dim: int = 32 activation_function: str = 'gelu' dropout: float = 0.1 encoder_layerdrop: float = 0.1 decoder_layerdrop: float = 0.1 attention_dropout: float = 0.1 activation_dropout: float = 0.1 num_parallel_samples: int = 100 init_std: float = 0.02 use_cache: bool = True is_encoder_decoder = True label_length: int = 10 moving_average: int = 25 autocorrelation_factor: int = 3 **kwargs )
Parameters
prediction_length (int) — The prediction length for the decoder. In other words, the prediction horizon of the model.
context_length (int, optional, defaults to prediction_length) — The context length for the encoder. If unset, the context length will be the same as the prediction_length.
distribution_output (string, optional, defaults to "student_t") — The distribution emission head for the model. Could be either “student_t”, “normal” or “negative_binomial”.
loss (string, optional, defaults to "nll") — The loss function for the model corresponding to the distribution_output head. For parametric distributions it is the negative log likelihood (nll) - which currently is the only supported one.
input_size (int, optional, defaults to 1) — The size of the target variable which by default is 1 for univariate targets. Would be > 1 in case of multivariate targets.
lags_sequence (list[int], optional, defaults to [1, 2, 3, 4, 5, 6, 7]) — The lags of the input time series used as covariates, often dictated by the frequency of the data. Defaults to [1, 2, 3, 4, 5, 6, 7].
scaling (bool, optional, defaults to True) — Whether to scale the input targets.
num_time_features (int, optional, defaults to 0) — The number of time features in the input time series.
num_dynamic_real_features (int, optional, defaults to 0) — The number of dynamic real valued features.
num_static_categorical_features (int, optional, defaults to 0) — The number of static categorical features.
num_static_real_features (int, optional, defaults to 0) — The number of static real valued features.
cardinality (list[int], optional) — The cardinality (number of different values) for each of the static categorical features. Should be a list of integers, having the same length as num_static_categorical_features. Cannot be None if num_static_categorical_features is > 0.
embedding_dimension (list[int], optional) — The dimension of the embedding for each of the static categorical features. Should be a list of integers, having the same length as num_static_categorical_features. Cannot be None if num_static_categorical_features is > 0.
d_model (int, optional, defaults to 64) — Dimensionality of the transformer layers.
encoder_layers (int, optional, defaults to 2) — Number of encoder layers.
decoder_layers (int, optional, defaults to 2) — Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 2) — Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 2) — Number of attention heads for each attention layer in the Transformer decoder.
encoder_ffn_dim (int, optional, defaults to 32) — Dimension of the “intermediate” (often named feed-forward) layer in encoder.
decoder_ffn_dim (int, optional, defaults to 32) — Dimension of the “intermediate” (often named feed-forward) layer in decoder.
activation_function (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and decoder. If string, "gelu" and "relu" are supported.
dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the encoder, and decoder.
encoder_layerdrop (float, optional, defaults to 0.1) — The dropout probability for the attention and fully connected layers for each encoder layer.
decoder_layerdrop (float, optional, defaults to 0.1) — The dropout probability for the attention and fully connected layers for each decoder layer.
attention_dropout (float, optional, defaults to 0.1) — The dropout probability for the attention probabilities.
activation_dropout (float, optional, defaults to 0.1) — The dropout probability used between the two layers of the feed-forward networks.
num_parallel_samples (int, optional, defaults to 100) — The number of samples to generate in parallel for each time step of inference.
init_std (float, optional, defaults to 0.02) — The standard deviation of the truncated normal weight initialization distribution.
use_cache (bool, optional, defaults to True) — Whether to use the past key/values attentions (if applicable to the model) to speed up decoding.
label_length (int, optional, defaults to 10) — Start token length of the Autoformer decoder, which is used for direct multi-step prediction (i.e. non-autoregressive generation).
moving_average (int, defaults to 25) — The window size of the moving average. In practice, it’s the kernel size in AvgPool1d of the Decomposition Layer.
autocorrelation_factor (int, defaults to 3) — “Attention” (i.e. Auto-Correlation mechanism) factor used to find the top k autocorrelation delays. The paper recommends setting it to a number between 1 and 5.
This is the configuration class to store the configuration of an AutoformerModel. It is used to instantiate an Autoformer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Autoformer huggingface/autoformer-tourism-monthly architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import AutoformerConfig, AutoformerModel
>>> # Initializing a default Autoformer configuration
>>> configuration = AutoformerConfig()
>>> # Randomly initializing a model (with random weights) from the configuration
>>> model = AutoformerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
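As a further sketch, the defaults can be overridden to match a particular dataset; all values below are illustrative choices, not recommendations:
>>> from transformers import AutoformerConfig, AutoformerForPrediction
>>> # illustrative settings for a univariate daily series with weekly seasonality
>>> custom_config = AutoformerConfig(
...     prediction_length=14,
...     context_length=28,
...     lags_sequence=[1, 2, 3, 7],
...     num_time_features=1,
...     d_model=32,
...     encoder_layers=2,
...     decoder_layers=2,
... )
>>> # a randomly initialized prediction model using this configuration
>>> model = AutoformerForPrediction(custom_config)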
AutoformerModel
class transformers.AutoformerModel
< source >
( config: AutoformerConfig )
Parameters
config (AutoformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Autoformer Model outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( past_values: Tensor past_time_features: Tensor past_observed_mask: Tensor static_categorical_features: typing.Optional[torch.Tensor] = None static_real_features: typing.Optional[torch.Tensor] = None future_values: typing.Optional[torch.Tensor] = None future_time_features: typing.Optional[torch.Tensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None decoder_head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None use_cache: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.autoformer.modeling_autoformer.AutoformerModelOutput or tuple(torch.FloatTensor)
Parameters
past_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Past values of the time series, that serve as context in order to predict the future. These values may contain lags, i.e. additional values from the past which are added in order to serve as “extra context”. The past_values is what the Transformer encoder gets as input (with optional additional features, such as static_categorical_features, static_real_features, past_time_features).
The sequence length here is equal to context_length + max(config.lags_sequence).
Missing values need to be replaced with zeros.
past_time_features (torch.FloatTensor of shape (batch_size, sequence_length, num_features), optional) — Optional time features, which the model internally will add to past_values. These could be things like “month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These could also be so-called “age” features, which basically help the model know “at which point in life” a time-series is. Age features have small values for distant past time steps and increase monotonically the more we approach the current time step.
These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where the position encodings are learned from scratch internally as parameters of the model, the Time Series Transformer requires you to provide additional time features.
The Autoformer only learns additional embeddings for static_categorical_features.
past_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length), optional) — Boolean mask to indicate which past_values were observed and which were missing. Mask values selected in [0, 1]:
1 for values that are observed,
0 for values that are missing (i.e. NaNs that were replaced by zeros).
static_categorical_features (torch.LongTensor of shape (batch_size, number of static categorical features), optional) — Optional static categorical features for which the model will learn an embedding, which it will add to the values of the time series.
Static categorical features are features which have the same value for all time steps (static over time).
A typical example of a static categorical feature is a time series ID.
static_real_features (torch.FloatTensor of shape (batch_size, number of static real features), optional) — Optional static real features which the model will add to the values of the time series.
Static real features are features which have the same value for all time steps (static over time).
A typical example of a static real feature is promotion information.
future_values (torch.FloatTensor of shape (batch_size, prediction_length)) — Future values of the time series, that serve as labels for the model. The future_values is what the Transformer needs to learn to output, given the past_values.
See the demo notebook and code snippets for details.
Missing values need to be replaced with zeros.
future_time_features (torch.FloatTensor of shape (batch_size, prediction_length, num_features), optional) — Optional time features, which the model internally will add to future_values. These could be things like “month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These could also be so-called “age” features, which basically help the model know “at which point in life” a time-series is. Age features have small values for distant past time steps and increase monotonically the more we approach the current time step.
These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where the position encodings are learned from scratch internally as parameters of the model, the Time Series Transformer requires you to provide additional features.
The Autoformer only learns additional embeddings for static_categorical_features.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on certain token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Mask to avoid performing attention on certain token indices. By default, a causal mask will be used, to make sure the model can only look at previous inputs in order to predict the future.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) — Tuple consisting of last_hidden_state, hidden_states (optional) and attentions (optional). last_hidden_state of shape (batch_size, sequence_length, hidden_size) (optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.autoformer.modeling_autoformer.AutoformerModelOutput or tuple(torch.FloatTensor)
A transformers.models.autoformer.modeling_autoformer.AutoformerModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AutoformerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
trend (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Trend tensor for each time series.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
loc (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Shift values of each time series’ context window which is used to give the model inputs of the same magnitude and then used to shift back to the original magnitude.
scale (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Scaling values of each time series’ context window which is used to give the model inputs of the same magnitude and then used to rescale back to the original magnitude.
static_features (torch.FloatTensor of shape (batch_size, feature size), optional) — Static features of each time series in a batch which are copied to the covariates at inference time.
The AutoformerModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> from huggingface_hub import hf_hub_download
>>> import torch
>>> from transformers import AutoformerModel
>>> file = hf_hub_download(
... repo_id="hf-internal-testing/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset"
... )
>>> batch = torch.load(file)
>>> model = AutoformerModel.from_pretrained("huggingface/autoformer-tourism-monthly")
>>> # during training, one provides both past and future values
>>> # as well as possible additional features
>>> outputs = model(
... past_values=batch["past_values"],
... past_time_features=batch["past_time_features"],
... past_observed_mask=batch["past_observed_mask"],
... static_categorical_features=batch["static_categorical_features"],
... future_values=batch["future_values"],
... future_time_features=batch["future_time_features"],
... )
>>> last_hidden_state = outputs.last_hidden_state
AutoformerForPrediction
class transformers.AutoformerForPrediction
< source >
( config: AutoformerConfig )
Parameters
config (AutoformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The Autoformer Model with a distribution head on top for time-series forecasting. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( past_values: Tensor past_time_features: Tensor past_observed_mask: Tensor static_categorical_features: typing.Optional[torch.Tensor] = None static_real_features: typing.Optional[torch.Tensor] = None future_values: typing.Optional[torch.Tensor] = None future_time_features: typing.Optional[torch.Tensor] = None future_observed_mask: typing.Optional[torch.Tensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None decoder_head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None use_cache: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.Seq2SeqTSPredictionOutput or tuple(torch.FloatTensor)
Parameters
past_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Past values of the time series, that serve as context in order to predict the future. These values may contain lags, i.e. additional values from the past which are added in order to serve as “extra context”. The past_values is what the Transformer encoder gets as input (with optional additional features, such as static_categorical_features, static_real_features, past_time_features).
The sequence length here is equal to context_length + max(config.lags_sequence).
Missing values need to be replaced with zeros.
past_time_features (torch.FloatTensor of shape (batch_size, sequence_length, num_features), optional) — Optional time features, which the model internally will add to past_values. These could be things like “month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These could also be so-called “age” features, which basically help the model know “at which point in life” a time-series is. Age features have small values for distant past time steps and increase monotonically the more we approach the current time step.
These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where the position encodings are learned from scratch internally as parameters of the model, the Time Series Transformer requires you to provide additional time features.
The Autoformer only learns additional embeddings for static_categorical_features.
past_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length), optional) — Boolean mask to indicate which past_values were observed and which were missing. Mask values selected in [0, 1]:
1 for values that are observed,
0 for values that are missing (i.e. NaNs that were replaced by zeros).
static_categorical_features (torch.LongTensor of shape (batch_size, number of static categorical features), optional) — Optional static categorical features for which the model will learn an embedding, which it will add to the values of the time series.
Static categorical features are features which have the same value for all time steps (static over time).
A typical example of a static categorical feature is a time series ID.
static_real_features (torch.FloatTensor of shape (batch_size, number of static real features), optional) — Optional static real features which the model will add to the values of the time series.
Static real features are features which have the same value for all time steps (static over time).
A typical example of a static real feature is promotion information.
future_values (torch.FloatTensor of shape (batch_size, prediction_length)) — Future values of the time series, that serve as labels for the model. The future_values is what the Transformer needs to learn to output, given the past_values.
See the demo notebook and code snippets for details.
Missing values need to be replaced with zeros.
future_time_features (torch.FloatTensor of shape (batch_size, prediction_length, num_features), optional) — Optional time features, which the model internally will add to future_values. These could be things like “month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These could also be so-called “age” features, which basically help the model know “at which point in life” a time-series is. Age features have small values for distant past time steps and increase monotonically the more we approach the current time step.
These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where the position encodings are learned from scratch internally as parameters of the model, the Time Series Transformer requires you to provide additional features.
The Autoformer only learns additional embeddings for static_categorical_features.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on certain token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Mask to avoid performing attention on certain token indices. By default, a causal mask will be used, to make sure the model can only look at previous inputs in order to predict the future.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) — Tuple consisting of last_hidden_state, hidden_states (optional) and attentions (optional). last_hidden_state of shape (batch_size, sequence_length, hidden_size) (optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqTSPredictionOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqTSPredictionOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AutoformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when future_values is provided) — Distributional loss.
params (torch.FloatTensor of shape (batch_size, num_samples, num_params)) — Parameters of the chosen distribution.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
loc (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Shift values of each time series’ context window which is used to give the model inputs of the same magnitude and then used to shift back to the original magnitude.
scale (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Scaling values of each time series’ context window which is used to give the model inputs of the same magnitude and then used to rescale back to the original magnitude.
static_features (torch.FloatTensor of shape (batch_size, feature size), optional) — Static features of each time series in a batch which are copied to the covariates at inference time.
The AutoformerForPrediction forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> from huggingface_hub import hf_hub_download
>>> import torch
>>> from transformers import AutoformerForPrediction
>>> file = hf_hub_download(
... repo_id="hf-internal-testing/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset"
... )
>>> batch = torch.load(file)
>>> model = AutoformerForPrediction.from_pretrained("huggingface/autoformer-tourism-monthly")
>>> # during training, one provides both past and future values
>>> # as well as possible additional features
>>> outputs = model(
... past_values=batch["past_values"],
... past_time_features=batch["past_time_features"],
... past_observed_mask=batch["past_observed_mask"],
... static_categorical_features=batch["static_categorical_features"],
... static_real_features=batch["static_real_features"],
... future_values=batch["future_values"],
... future_time_features=batch["future_time_features"],
... )
>>> loss = outputs.loss
>>> loss.backward()
>>> # during inference, one only provides past values
>>> # as well as possible additional features
>>> # the model autoregressively generates future values
>>> outputs = model.generate(
... past_values=batch["past_values"],
... past_time_features=batch["past_time_features"],
... past_observed_mask=batch["past_observed_mask"],
... static_categorical_features=batch["static_categorical_features"],
... static_real_features=batch["static_real_features"],
... future_time_features=batch["future_time_features"],
... )
>>> mean_prediction = outputs.sequences.mean(dim=1)
Utilities for Image Processors
This page lists all the utility functions used by the image processors, mainly the functional transformations used to process the images.
Most of those are only useful if you are studying the code of the image processors in the library.
Image Transformations
transformers.image_transforms.center_crop
< source >
( image: ndarray size: typing.Tuple[int, int] data_format: typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None input_data_format: typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None return_numpy: typing.Optional[bool] = None ) → np.ndarray
Parameters
image (np.ndarray) — The image to crop.
size (Tuple[int, int]) — The target size for the cropped image.
data_format (str or ChannelDimension, optional) — The channel dimension format for the output image. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. If unset, will use the inferred format of the input image.
input_data_format (str or ChannelDimension, optional) — The channel dimension format for the input image. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. If unset, will use the inferred format of the input image.
return_numpy (bool, optional) — Whether or not to return the cropped image as a numpy array. Used for backwards compatibility with the previous ImageFeatureExtractionMixin method.
Unset: will return the same type as the input image.
True: will return a numpy array.
False: will return a PIL.Image.Image object.
Returns
np.ndarray
The cropped image.
Crops the image to the specified size using a center crop. Note that if the image is too small to be cropped to the size given, it will be padded (so the returned result will always be of size size).
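For illustration, a minimal sketch of calling this helper on a NumPy array (the image contents and target size below are arbitrary examples):
>>> import numpy as np
>>> from transformers.image_transforms import center_crop
>>> # a toy channels-first image of shape (num_channels, height, width)
>>> image = np.random.randint(0, 256, (3, 480, 640)).astype(np.uint8)
>>> cropped = center_crop(image, size=(224, 224))
>>> print(cropped.shape)  # (3, 224, 224)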
transformers.image_transforms.center_to_corners_format
< source >
( bboxes_center: TensorType )
Converts bounding boxes from center format to corners format.
center format: contains the coordinates for the center of the box and its width and height dimensions (center_x, center_y, width, height) corners format: contains the coordinates for the top-left and bottom-right corners of the box (top_left_x, top_left_y, bottom_right_x, bottom_right_y)
transformers.image_transforms.corners_to_center_format
< source >
( bboxes_corners: TensorType )
Converts bounding boxes from corners format to center format.
corners format: contains the coordinates for the top-left and bottom-right corners of the box (top_left_x, top_left_y, bottom_right_x, bottom_right_y) center format: contains the coordinates for the center of the box and its width and height dimensions (center_x, center_y, width, height)
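A minimal sketch of round-tripping a single box between the two formats (the coordinates are arbitrary placeholders):
import numpy as np
from transformers.image_transforms import center_to_corners_format, corners_to_center_format

# One box in (center_x, center_y, width, height) format.
boxes_center = np.array([[0.5, 0.5, 0.2, 0.4]])

boxes_corners = center_to_corners_format(boxes_center)
print(boxes_corners)  # [[0.4 0.3 0.6 0.7]]

# Converting back recovers the original center-format box.
print(corners_to_center_format(boxes_corners))  # [[0.5 0.5 0.2 0.4]]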
transformers.image_transforms.id_to_rgb
< source >
( id_map )
Converts unique ID to RGB color.
transformers.image_transforms.normalize
< source >
( image: ndarray mean: typing.Union[float, typing.Iterable[float]] std: typing.Union[float, typing.Iterable[float]] data_format: typing.Optional[transformers.image_utils.ChannelDimension] = None input_data_format: typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None )
Parameters
image (np.ndarray) — The image to normalize.
mean (float or Iterable[float]) — The mean to use for normalization.
std (float or Iterable[float]) — The standard deviation to use for normalization.
data_format (ChannelDimension, optional) — The channel dimension format of the output image. If unset, will use the inferred format from the input.
input_data_format (ChannelDimension, optional) — The channel dimension format of the input image. If unset, will use the inferred format from the input.
Normalizes image using the mean and standard deviation specified by mean and std.
image = (image - mean) / std
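A minimal sketch of per-channel normalization on a dummy channels-last array (the mean and std values are arbitrary placeholders):
import numpy as np
from transformers.image_transforms import normalize

# A dummy image in (height, width, num_channels) format with values in [0, 1].
image = np.random.rand(32, 32, 3).astype(np.float32)

# Subtract the per-channel mean and divide by the per-channel standard deviation.
normalized = normalize(image, mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
print(normalized.shape)  # (32, 32, 3)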
transformers.image_transforms.pad
< source >
( image: ndarray padding: typing.Union[int, typing.Tuple[int, int], typing.Iterable[typing.Tuple[int, int]]] mode: PaddingMode = <PaddingMode.CONSTANT: 'constant'> constant_values: typing.Union[float, typing.Iterable[float]] = 0.0 data_format: typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None input_data_format: typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None ) → np.ndarray
Parameters
image (np.ndarray) — The image to pad.
padding (int or Tuple[int, int] or Iterable[Tuple[int, int]]) — Padding to apply to the edges of the height, width axes. Can be one of three formats:
((before_height, after_height), (before_width, after_width)) unique pad widths for each axis.
((before, after),) yields same before and after pad for height and width.
(pad,) or int is a shortcut for before = after = pad width for all axes.
mode (PaddingMode) — The padding mode to use. Can be one of:
"constant": pads with a constant value.
"reflect": pads with the reflection of the vector mirrored on the first and last values of the vector along each axis.
"replicate": pads with the replication of the last value on the edge of the array along each axis.
"symmetric": pads with the reflection of the vector mirrored along the edge of the array.
constant_values (float or Iterable[float], optional) — The value to use for the padding if mode is "constant".
data_format (str or ChannelDimension, optional) — The channel dimension format for the output image. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. If unset, will use same as the input image.
input_data_format (str or ChannelDimension, optional) — The channel dimension format for the input image. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. If unset, will use the inferred format of the input image.
The padded image.
Pads the image with the specified (height, width) padding and mode.
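A minimal sketch of constant padding on a dummy channels-first array (the padding amounts are arbitrary placeholders):
import numpy as np
from transformers.image_transforms import pad

image = np.random.rand(3, 32, 32)

# Pad 4 pixels before and after both the height and width axes with zeros.
padded = pad(image, padding=((4, 4), (4, 4)))
print(padded.shape)  # (3, 40, 40)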
transformers.image_transforms.rgb_to_id
< source >
( color )
Converts RGB color to unique ID.
transformers.image_transforms.rescale
< source >
( image: ndarray scale: float data_format: typing.Optional[transformers.image_utils.ChannelDimension] = None dtype: dtype = <class 'numpy.float32'> input_data_format: typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None ) → np.ndarray
Parameters
image (np.ndarray) — The image to rescale.
scale (float) — The scale to use for rescaling the image.
data_format (ChannelDimension, optional) — The channel dimension format of the image. If not provided, it will be the same as the input image.
dtype (np.dtype, optional, defaults to np.float32) — The dtype of the output image. Defaults to np.float32. Used for backwards compatibility with feature extractors.
input_data_format (ChannelDimension, optional) — The channel dimension format of the input image. If not provided, it will be inferred from the input image.
The rescaled image.
Rescales image by scale.
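A minimal sketch of rescaling a dummy uint8 image into the [0, 1] range (the scale factor 1/255 is the usual choice for 8-bit images):
import numpy as np
from transformers.image_transforms import rescale

# A dummy uint8 image with values in [0, 255].
image = np.random.randint(0, 256, (3, 32, 32), dtype=np.uint8)

# Multiply every pixel by 1/255 and cast to float32 (the default dtype).
rescaled = rescale(image, scale=1 / 255)
print(rescaled.dtype, rescaled.max() <= 1.0)  # float32 True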
transformers.image_transforms.resize
< source >
( image size: typing.Tuple[int, int] resample: PILImageResampling = None reducing_gap: typing.Optional[int] = None data_format: typing.Optional[transformers.image_utils.ChannelDimension] = None return_numpy: bool = True input_data_format: typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None ) → np.ndarray
Parameters
image (PIL.Image.Image or np.ndarray or torch.Tensor) — The image to resize.
size (Tuple[int, int]) — The size to use for resizing the image.
resample (int, optional, defaults to PILImageResampling.BILINEAR) — The filter to use for resampling.
reducing_gap (int, optional) — Apply optimization by resizing the image in two steps. The bigger reducing_gap, the closer the result to the fair resampling. See corresponding Pillow documentation for more details.
data_format (ChannelDimension, optional) — The channel dimension format of the output image. If unset, will use the inferred format from the input.
return_numpy (bool, optional, defaults to True) — Whether or not to return the resized image as a numpy array. If False a PIL.Image.Image object is returned.
input_data_format (ChannelDimension, optional) — The channel dimension format of the input image. If unset, will use the inferred format from the input.
The resized image.
Resizes image to (height, width) specified by size using the PIL library.
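A minimal sketch of resizing a dummy channels-last image (the target size is an arbitrary placeholder; Pillow must be installed):
import numpy as np
from transformers.image_transforms import resize

# A dummy uint8 image in (height, width, num_channels) format.
image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)

# Resize to (height, width) = (32, 48); a numpy array is returned by default.
resized = resize(image, size=(32, 48))
print(resized.shape)  # (32, 48, 3)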
transformers.image_transforms.to_pil_image
< source >
( image: typing.Union[numpy.ndarray, ForwardRef('PIL.Image.Image'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor'), ForwardRef('jnp.ndarray')] do_rescale: typing.Optional[bool] = None input_data_format: typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None ) → PIL.Image.Image
Parameters
image (PIL.Image.Image or numpy.ndarray or torch.Tensor or tf.Tensor) — The image to convert to the PIL.Image format.
do_rescale (bool, optional) — Whether or not to apply the scaling factor (to make pixel values integers between 0 and 255). Will default to True if the image type is a floating type and casting to int would result in a loss of precision, and False otherwise.
input_data_format (ChannelDimension, optional) — The channel dimension format of the input image. If unset, will use the inferred format from the input.
The converted image.
Converts image to a PIL Image. Optionally rescales it and puts the channel dimension back as the last axis if needed.
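A minimal sketch of converting a dummy float array to a PIL.Image.Image (values in [0, 1] are rescaled to [0, 255] automatically; Pillow must be installed):
import numpy as np
from transformers.image_transforms import to_pil_image

# A dummy float image in (height, width, num_channels) format with values in [0, 1].
image = np.random.rand(32, 32, 3)

pil_image = to_pil_image(image)
print(pil_image.size)  # (32, 32)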
ImageProcessingMixin
class transformers.ImageProcessingMixin
< source >
( **kwargs )
This is an image processor mixin used to provide saving/loading functionality for sequential and image feature extractors.
fetch_images
< source >
( image_url_or_urls: typing.Union[str, typing.List[str]] )
Convert a single or a list of urls into the corresponding PIL.Image objects.
If a single url is passed, the return value will be a single object. If a list is passed a list of objects is returned.
from_dict
< source >
( image_processor_dict: typing.Dict[str, typing.Any] **kwargs ) → ImageProcessingMixin
Parameters
image_processor_dict (Dict[str, Any]) — Dictionary that will be used to instantiate the image processor object. Such a dictionary can be retrieved from a pretrained checkpoint by leveraging the to_dict() method.
kwargs (Dict[str, Any]) — Additional parameters from which to initialize the image processor object.
The image processor object instantiated from those parameters.
Instantiates a type of ImageProcessingMixin from a Python dictionary of parameters.
from_json_file
< source >
( json_file: typing.Union[str, os.PathLike] ) → An image processor of type ImageProcessingMixin
Parameters
json_file (str or os.PathLike) — Path to the JSON file containing the parameters.
Returns
An image processor of type ImageProcessingMixin
The image_processor object instantiated from that JSON file.
Instantiates an image processor of type ImageProcessingMixin from the path to a JSON file of parameters.
from_pretrained
< source >
( pretrained_model_name_or_path: typing.Union[str, os.PathLike] cache_dir: typing.Union[str, os.PathLike, NoneType] = None force_download: bool = False local_files_only: bool = False token: typing.Union[bool, str, NoneType] = None revision: str = 'main' **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — This can be either:
a string, the model id of a pretrained image_processor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
a path to a directory containing an image processor file saved using the save_pretrained() method, e.g., ./my_model_directory/.
a path or url to a saved image processor JSON file, e.g., ./my_model_directory/preprocessor_config.json.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model image processor should be cached if the standard cache should not be used.
force_download (bool, optional, defaults to False) — Whether or not to force to (re-)download the image processor files and override the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete an incompletely received file. Attempts to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
Instantiate a type of ImageProcessingMixin from an image processor.
Examples:
image_processor = CLIPImageProcessor.from_pretrained(
"openai/clip-vit-base-patch32"
)
image_processor = CLIPImageProcessor.from_pretrained(
"./test/saved_model/"
)
image_processor = CLIPImageProcessor.from_pretrained("./test/saved_model/preprocessor_config.json")
image_processor = CLIPImageProcessor.from_pretrained(
"openai/clip-vit-base-patch32", do_normalize=False, foo=False
)
assert image_processor.do_normalize is False
image_processor, unused_kwargs = CLIPImageProcessor.from_pretrained(
"openai/clip-vit-base-patch32", do_normalize=False, foo=False, return_unused_kwargs=True
)
assert image_processor.do_normalize is False
assert unused_kwargs == {"foo": False}
get_image_processor_dict
< source >
( pretrained_model_name_or_path: typing.Union[str, os.PathLike] **kwargs ) → Tuple[Dict, Dict]
Parameters
pretrained_model_name_or_path (str or os.PathLike) — The identifier of the pre-trained checkpoint from which we want the dictionary of parameters.
subfolder (str, optional, defaults to "") — In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can specify the folder name here.
Returns
Tuple[Dict, Dict]
The dictionary(ies) that will be used to instantiate the image processor object.
From a pretrained_model_name_or_path, resolve to a dictionary of parameters, to be used for instantiating an image processor of type ~image_processor_utils.ImageProcessingMixin using from_dict.
push_to_hub
< source >
( repo_id: str use_temp_dir: typing.Optional[bool] = None commit_message: typing.Optional[str] = None private: typing.Optional[bool] = None token: typing.Union[bool, str, NoneType] = None max_shard_size: typing.Union[int, str, NoneType] = '10GB' create_pr: bool = False safe_serialization: bool = False revision: str = None **deprecated_kwargs )
Parameters
repo_id (str) — The name of the repository you want to push your image processor to. It should contain your organization name when pushing to a given organization.
use_temp_dir (bool, optional) — Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to True if there is no directory named like repo_id, False otherwise.
commit_message (str, optional) — Message to commit while pushing. Will default to "Upload image processor".
private (bool, optional) — Whether or not the repository created should be private.
token (bool or str, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.
max_shard_size (int or str, optional, defaults to "10GB") — Only applicable for models. The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be smaller than this size. If expressed as a string, needs to be digits followed by a unit (like "5MB").
create_pr (bool, optional, defaults to False) — Whether or not to create a PR with the uploaded files or directly commit.
safe_serialization (bool, optional, defaults to False) — Whether or not to convert the model weights in safetensors format for safer serialization.
revision (str, optional) — Branch to push the uploaded files to.
Upload the image processor file to the 🤗 Model Hub.
Examples:
from transformers import AutoImageProcessor
image_processor = AutoImageProcessor.from_pretrained("bert-base-cased")
image_processor.push_to_hub("my-finetuned-bert")
image_processor.push_to_hub("huggingface/my-finetuned-bert")
register_for_auto_class
< source >
( auto_class = 'AutoImageProcessor' )
Parameters
auto_class (str or type, optional, defaults to "AutoImageProcessor") — The auto class to register this new image processor with.
Register this class with a given auto class. This should only be used for custom image processors as the ones in the library are already mapped with AutoImageProcessor.
This API is experimental and may have some slight breaking changes in the next releases.
save_pretrained
< source >
( save_directory: typing.Union[str, os.PathLike] push_to_hub: bool = False **kwargs )
Parameters
save_directory (str or os.PathLike) — Directory where the image processor JSON file will be saved (will be created if it does not exist).
push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).
kwargs (Dict[str, Any], optional) — Additional key word arguments passed along to the push_to_hub() method.
Save an image processor object to the directory save_directory, so that it can be re-loaded using the from_pretrained() class method.
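A minimal sketch of the save/reload round trip, using CLIPImageProcessor as a concrete subclass (the local directory name is an arbitrary placeholder, and the checkpoint is downloaded from the Hub):
from transformers import CLIPImageProcessor

image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Save the image processor configuration to a local directory...
image_processor.save_pretrained("./my_image_processor")

# ...and reload it later with from_pretrained.
reloaded = CLIPImageProcessor.from_pretrained("./my_image_processor")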
to_dict
< source >
( ) → Dict[str, Any]
Dictionary of all the attributes that make up this image processor instance.
Serializes this instance to a Python dictionary.
to_json_file
< source >
( json_file_path: typing.Union[str, os.PathLike] )
Parameters
json_file_path (str or os.PathLike) — Path to the JSON file in which this image_processor instance’s parameters will be saved.
Save this instance to a JSON file.
to_json_string
< source >
( ) → str
String containing all the attributes that make up this image_processor instance in JSON format.
Serializes this instance to a JSON string. |
https://huggingface.co/docs/transformers/internal/audio_utils | Utilities for FeatureExtractors
This page lists all the utility functions that can be used by the audio FeatureExtractor in order to compute special features from a raw audio using common algorithms such as Short Time Fourier Transform or log mel spectrogram.
Most of those are only useful if you are studying the code of the audio processors in the library.
Audio Transformations
transformers.audio_utils.hertz_to_mel
< source >
( freq: typing.Union[float, numpy.ndarray] mel_scale: str = 'htk' ) → float or np.ndarray
Parameters
freq (float or np.ndarray) — The frequency, or multiple frequencies, in hertz (Hz).
mel_scale (str, optional, defaults to "htk") — The mel frequency scale to use, "htk", "kaldi" or "slaney".
Returns
float or np.ndarray
The frequencies on the mel scale.
Convert frequency from hertz to mels.
transformers.audio_utils.mel_to_hertz
< source >
( mels: typing.Union[float, numpy.ndarray] mel_scale: str = 'htk' ) → float or np.ndarray
Parameters
mels (float or np.ndarray) — The frequency, or multiple frequencies, in mels.
mel_scale (str, optional, defaults to "htk") — The mel frequency scale to use, "htk", "kaldi" or "slaney".
Returns
float or np.ndarray
The frequencies in hertz.
Convert frequency from mels to hertz.
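A minimal sketch of a hertz/mel round trip (the frequencies are arbitrary placeholders):
import numpy as np
from transformers.audio_utils import hertz_to_mel, mel_to_hertz

freqs = np.array([440.0, 1000.0, 8000.0])

mels = hertz_to_mel(freqs, mel_scale="htk")
# Converting back recovers the original frequencies up to floating point error.
print(mel_to_hertz(mels, mel_scale="htk"))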
transformers.audio_utils.mel_filter_bank
< source >
( num_frequency_bins: int num_mel_filters: int min_frequency: float max_frequency: float sampling_rate: int norm: typing.Optional[str] = None mel_scale: str = 'htk' triangularize_in_mel_space: bool = False ) → np.ndarray of shape (num_frequency_bins, num_mel_filters)
Parameters
num_frequency_bins (int) — Number of frequencies used to compute the spectrogram (should be the same as in stft).
num_mel_filters (int) — Number of mel filters to generate.
min_frequency (float) — Lowest frequency of interest in Hz.
max_frequency (float) — Highest frequency of interest in Hz. This should not exceed sampling_rate / 2.
sampling_rate (int) — Sample rate of the audio waveform.
norm (str, optional) — If "slaney", divide the triangular mel weights by the width of the mel band (area normalization).
mel_scale (str, optional, defaults to "htk") — The mel frequency scale to use, "htk", "kaldi" or "slaney".
triangularize_in_mel_space (bool, optional, defaults to False) — If this option is enabled, the triangular filter is applied in mel space rather than frequency space. This should be set to true in order to get the same results as torchaudio when computing mel filters.
Returns
np.ndarray of shape (num_frequency_bins, num_mel_filters)
Triangular filter bank matrix. This is a projection matrix to go from a spectrogram to a mel spectrogram.
Creates a frequency bin conversion matrix used to obtain a mel spectrogram. This is called a mel filter bank, and various implementations exist, which differ in the number of filters, the shape of the filters, the way the filters are spaced, the bandwidth of the filters, and the manner in which the spectrum is warped. The goal of these features is to approximate the non-linear human perception of the variation in pitch with respect to the frequency.
Different banks of mel filters were introduced in the literature. The following variations are supported:
MFCC FB-20: introduced in 1980 by Davis and Mermelstein, it assumes a sampling frequency of 10 kHz and a speech bandwidth of [0, 4600] Hz.
MFCC FB-24 HTK: from the Cambridge HMM Toolkit (HTK) (1995) uses a filter bank of 24 filters for a speech bandwidth of [0, 8000] Hz. This assumes sampling rate ≥ 16 kHz.
MFCC FB-40: from the Auditory Toolbox for MATLAB written by Slaney in 1998, assumes a sampling rate of 16 kHz and speech bandwidth of [133, 6854] Hz. This version also includes area normalization.
HFCC-E FB-29 (Human Factor Cepstral Coefficients) of Skowronski and Harris (2004), assumes a sampling rate of 12.5 kHz and speech bandwidth of [0, 6250] Hz.
This code is adapted from torchaudio and librosa. Note that the default parameters of torchaudio’s melscale_fbanks implement the "htk" filters while librosa uses the "slaney" implementation.
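A minimal sketch of building a filter bank for 16 kHz audio and a 512-point FFT (the number of mel filters is an arbitrary placeholder):
from transformers.audio_utils import mel_filter_bank

# A one-sided 512-point FFT gives 512 // 2 + 1 = 257 frequency bins.
filters = mel_filter_bank(
    num_frequency_bins=257,
    num_mel_filters=80,
    min_frequency=0.0,
    max_frequency=8000.0,
    sampling_rate=16000,
    norm=None,
    mel_scale="htk",
)
print(filters.shape)  # (257, 80)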
transformers.audio_utils.optimal_fft_length
< source >
( window_length: int )
Finds the best FFT input size for a given window_length. This function takes a given window length and, if not already a power of two, rounds it up to the next power of two.
The FFT algorithm works fastest when the length of the input is a power of two, which may be larger than the size of the window or analysis frame. For example, if the window is 400 samples, using an FFT input size of 512 samples is more optimal than an FFT size of 400 samples. Using a larger FFT size does not affect the detected frequencies, it simply gives a higher frequency resolution (i.e. the frequency bins are smaller).
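A minimal sketch showing the rounding behavior:
from transformers.audio_utils import optimal_fft_length

print(optimal_fft_length(400))   # 512
print(optimal_fft_length(1024))  # 1024, already a power of two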
transformers.audio_utils.window_function
< source >
( window_length: int name: str = 'hann' periodic: bool = True frame_length: typing.Optional[int] = None center: bool = True )
Parameters
window_length (int) — The length of the window in samples.
name (str, optional, defaults to "hann") — The name of the window function.
periodic (bool, optional, defaults to True) — Whether the window is periodic or symmetric.
frame_length (int, optional) — The length of the analysis frames in samples. Provide a value for frame_length if the window is smaller than the frame length, so that it will be zero-padded.
center (bool, optional, defaults to True) — Whether to center the window inside the FFT buffer. Only used when frame_length is provided.
Returns an array containing the specified window. This window is intended to be used with stft.
The following window types are supported:
"boxcar": a rectangular window
"hamming": the Hamming window
"hann": the Hann window
"povey": the Povey window
transformers.audio_utils.spectrogram
< source >
( waveform: ndarray window: ndarray frame_length: int hop_length: int fft_length: typing.Optional[int] = None power: typing.Optional[float] = 1.0 center: bool = True pad_mode: str = 'reflect' onesided: bool = True preemphasis: typing.Optional[float] = None mel_filters: typing.Optional[numpy.ndarray] = None mel_floor: float = 1e-10 log_mel: typing.Optional[str] = None reference: float = 1.0 min_value: float = 1e-10 db_range: typing.Optional[float] = None remove_dc_offset: typing.Optional[bool] = None dtype: dtype = <class 'numpy.float32'> )
Parameters
waveform (np.ndarray of shape (length,)) — The input waveform. This must be a single real-valued, mono waveform.
window (np.ndarray of shape (frame_length,)) — The windowing function to apply, including zero-padding if necessary. The actual window length may be shorter than frame_length, but we’re assuming the array has already been zero-padded.
frame_length (int) — The length of the analysis frames in samples. With librosa this is always equal to fft_length but we also allow smaller sizes.
hop_length (int) — The stride between successive analysis frames in samples.
fft_length (int, optional) — The size of the FFT buffer in samples. This determines how many frequency bins the spectrogram will have. For optimal speed, this should be a power of two. If None, uses frame_length.
power (float, optional, defaults to 1.0) — If 1.0, returns the amplitude spectrogram. If 2.0, returns the power spectrogram. If None, returns complex numbers.
center (bool, optional, defaults to True) — Whether to pad the waveform so that frame t is centered around time t * hop_length. If False, frame t will start at time t * hop_length.
pad_mode (str, optional, defaults to "reflect") — Padding mode used when center is True. Possible values are: "constant" (pad with zeros), "edge" (pad with edge values), "reflect" (pads with mirrored values).
onesided (bool, optional, defaults to True) — If True, only computes the positive frequencies and returns a spectrogram containing fft_length // 2 + 1 frequency bins. If False, also computes the negative frequencies and returns fft_length frequency bins.
preemphasis (float, optional) — Coefficient for a low-pass filter that applies pre-emphasis before the DFT.
mel_filters (np.ndarray of shape (num_freq_bins, num_mel_filters), optional) — The mel filter bank. If supplied, applies this filter bank to create a mel spectrogram.
mel_floor (float, optional, defaults to 1e-10) — Minimum value of mel frequency banks.
log_mel (str, optional) — How to convert the spectrogram to log scale. Possible options are: None (don’t convert), "log" (take the natural logarithm) "log10" (take the base-10 logarithm), "dB" (convert to decibels). Can only be used when power is not None.
reference (float, optional, defaults to 1.0) — Sets the input spectrogram value that corresponds to 0 dB. For example, use np.max(spectrogram) to set the loudest part to 0 dB. Must be greater than zero.
min_value (float, optional, defaults to 1e-10) — The spectrogram will be clipped to this minimum value before conversion to decibels, to avoid taking log(0). For a power spectrogram, the default of 1e-10 corresponds to a minimum of -100 dB. For an amplitude spectrogram, the value 1e-5 corresponds to -100 dB. Must be greater than zero.
db_range (float, optional) — Sets the maximum dynamic range in decibels. For example, if db_range = 80, the difference between the peak value and the smallest value will never be more than 80 dB. Must be greater than zero.
remove_dc_offset (bool, optional) — Subtract mean from waveform on each frame, applied before pre-emphasis. This should be set to true in order to get the same results as torchaudio.compliance.kaldi.fbank when computing mel filters.
dtype (np.dtype, optional, defaults to np.float32) — Data type of the spectrogram tensor. If power is None, this argument is ignored and the dtype will be np.complex64.
Calculates a spectrogram over one waveform using the Short-Time Fourier Transform.
This function can create the following kinds of spectrograms:
amplitude spectrogram (power = 1.0)
power spectrogram (power = 2.0)
complex-valued spectrogram (power = None)
log spectrogram (use log_mel argument)
mel spectrogram (provide mel_filters)
log-mel spectrogram (provide mel_filters and log_mel)
How this works:
The input waveform is split into frames of size frame_length that are partially overlapping by frame_length - hop_length samples.
Each frame is multiplied by the window and placed into a buffer of size fft_length.
The DFT is taken of each windowed frame.
The results are stacked into a spectrogram.
We make a distinction between the following “blocks” of sample data, each of which may have a different length:
The analysis frame. This is the size of the time slices that the input waveform is split into.
The window. Each analysis frame is multiplied by the window to avoid spectral leakage.
The FFT input buffer. The length of this determines how many frequency bins are in the spectrogram.
In this implementation, the window is assumed to be zero-padded to have the same size as the analysis frame. A padded window can be obtained from window_function(). The FFT input buffer may be larger than the analysis frame, typically the next power of two.
Note: This function is not optimized for speed yet. It should be mostly compatible with librosa.stft and torchaudio.functional.transforms.Spectrogram, although it is more flexible due to the different ways spectrograms can be constructed.
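A minimal sketch of computing a log-mel spectrogram from a dummy waveform, combining window_function, mel_filter_bank and spectrogram (the frame, hop and FFT lengths are typical choices for 16 kHz audio, not prescribed values):
import numpy as np
from transformers.audio_utils import mel_filter_bank, spectrogram, window_function

# One second of dummy 16 kHz audio; replace with a real mono waveform.
waveform = np.random.randn(16000).astype(np.float32)

mel_filters = mel_filter_bank(
    num_frequency_bins=257,  # fft_length // 2 + 1
    num_mel_filters=80,
    min_frequency=0.0,
    max_frequency=8000.0,
    sampling_rate=16000,
)

log_mel = spectrogram(
    waveform,
    window=window_function(400, "hann"),
    frame_length=400,
    hop_length=160,
    fft_length=512,
    power=2.0,
    mel_filters=mel_filters,
    log_mel="log10",
)
print(log_mel.shape)  # (num_mel_filters, number of frames)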
transformers.audio_utils.power_to_db
< source >
( spectrogram: ndarray reference: float = 1.0 min_value: float = 1e-10 db_range: typing.Optional[float] = None ) → np.ndarray
Parameters
spectrogram (np.ndarray) — The input power (mel) spectrogram. Note that a power spectrogram has the amplitudes squared!
reference (float, optional, defaults to 1.0) — Sets the input spectrogram value that corresponds to 0 dB. For example, use np.max(spectrogram) to set the loudest part to 0 dB. Must be greater than zero.
min_value (float, optional, defaults to 1e-10) — The spectrogram will be clipped to this minimum value before conversion to decibels, to avoid taking log(0). The default of 1e-10 corresponds to a minimum of -100 dB. Must be greater than zero.
db_range (float, optional) — Sets the maximum dynamic range in decibels. For example, if db_range = 80, the difference between the peak value and the smallest value will never be more than 80 dB. Must be greater than zero.
the spectrogram in decibels
Converts a power spectrogram to the decibel scale. This computes 10 * log10(spectrogram / reference), using basic logarithm properties for numerical stability.
The motivation behind applying the log function on the (mel) spectrogram is that humans do not hear loudness on a linear scale. Generally to double the perceived volume of a sound we need to put 8 times as much energy into it. This means that large variations in energy may not sound all that different if the sound is loud to begin with. This compression operation makes the (mel) spectrogram features match more closely what humans actually hear.
Based on the implementation of librosa.power_to_db.
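A minimal sketch on a tiny hand-made power spectrogram (the values are arbitrary placeholders):
import numpy as np
from transformers.audio_utils import power_to_db

power_spec = np.array([[1e-6, 1e-3, 1.0]])

# Use the loudest value as the 0 dB reference and limit the dynamic range to 80 dB.
db_spec = power_to_db(power_spec, reference=np.max(power_spec), db_range=80.0)
print(db_spec)  # approximately [[-60. -30.   0.]]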
transformers.audio_utils.amplitude_to_db
< source >
( spectrogram: ndarray reference: float = 1.0 min_value: float = 1e-05 db_range: typing.Optional[float] = None ) → np.ndarray
Parameters
spectrogram (np.ndarray) — The input amplitude (mel) spectrogram.
reference (float, optional, defaults to 1.0) — Sets the input spectrogram value that corresponds to 0 dB. For example, use np.max(spectrogram) to set the loudest part to 0 dB. Must be greater than zero.
min_value (float, optional, defaults to 1e-5) — The spectrogram will be clipped to this minimum value before conversion to decibels, to avoid taking log(0). The default of 1e-5 corresponds to a minimum of -100 dB. Must be greater than zero.
db_range (float, optional) — Sets the maximum dynamic range in decibels. For example, if db_range = 80, the difference between the peak value and the smallest value will never be more than 80 dB. Must be greater than zero.
the spectrogram in decibels
Converts an amplitude spectrogram to the decibel scale. This computes 20 * log10(spectrogram / reference), using basic logarithm properties for numerical stability.
The motivation behind applying the log function on the (mel) spectrogram is that humans do not hear loudness on a linear scale. Generally to double the perceived volume of a sound we need to put 8 times as much energy into it. This means that large variations in energy may not sound all that different if the sound is loud to begin with. This compression operation makes the (mel) spectrogram features match more closely what humans actually hear. |
https://huggingface.co/docs/transformers/internal/file_utils | Transformers documentation
General Utilities
General Utilities
This page lists all of Transformers general utility functions that are found in the file utils.py.
Most of those are only useful if you are studying the general code in the library.
Enums and namedtuples
class transformers.utils.ExplicitEnum
< source >
( value names = None module = None qualname = None type = None start = 1 )
Enum with more explicit error message for missing values.
class transformers.utils.PaddingStrategy
< source >
( value names = None module = None qualname = None type = None start = 1 )
Possible values for the padding argument in PreTrainedTokenizerBase.call(). Useful for tab-completion in an IDE.
class transformers.TensorType
< source >
( value names = None module = None qualname = None type = None start = 1 )
Possible values for the return_tensors argument in PreTrainedTokenizerBase.call(). Useful for tab-completion in an IDE.
Special Decorators
transformers.add_start_docstrings
< source >
( *docstr )
transformers.utils.add_start_docstrings_to_model_forward
< source >
( *docstr )
transformers.add_end_docstrings
< source >
( *docstr )
transformers.utils.add_code_sample_docstrings
< source >
( *docstr processor_class = None checkpoint = None output_type = None config_class = None mask = '[MASK]' qa_target_start_index = 14 qa_target_end_index = 15 model_cls = None modality = None expected_output = None expected_loss = None real_checkpoint = None )
transformers.utils.replace_return_docstrings
< source >
( output_type = None config_class = None )
Special Properties
class transformers.utils.cached_property
< source >
( fget = None fset = None fdel = None doc = None )
Descriptor that mimics @property but caches output in member variable.
From tensorflow_datasets
Built into functools since Python 3.8.
Other Utilities
class transformers.utils._LazyModule
< source >
( name module_file import_structure module_spec = None extra_objects = None )
Module class that surfaces all objects but only performs associated imports when the objects are requested. |
https://huggingface.co/docs/transformers/model_doc/align | ALIGN
Overview
The ALIGN model was proposed in Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. ALIGN is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. ALIGN features a dual-encoder architecture with EfficientNet as its vision encoder and BERT as its text encoder, and learns to align visual and text representations with contrastive learning. Unlike previous work, ALIGN leverages a massive noisy dataset and shows that the scale of the corpus can be used to achieve SOTA representations with a simple recipe.
The abstract from the paper is the following:
Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enables zero-shot image classification and also set new state-of-the-art results on Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.
Usage
ALIGN uses EfficientNet to get visual features and BERT to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similarity score.
AlignProcessor wraps EfficientNetImageProcessor and BertTokenizer into a single instance to both encode the text and preprocess the images. The following example shows how to get the image-text similarity scores using AlignProcessor and AlignModel.
import requests
import torch
from PIL import Image
from transformers import AlignProcessor, AlignModel
processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
model = AlignModel.from_pretrained("kakaobrain/align-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["an image of a cat", "an image of a dog"]
inputs = processor(text=candidate_labels, images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
print(probs)
This model was contributed by Alara Dirik. The original code is not released; this implementation is based on the Kakao Brain implementation of the original paper.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ALIGN.
A blog post on ALIGN and the COYO-700M dataset.
A zero-shot image classification demo.
Model card of kakaobrain/align-base model.
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource.
AlignConfig
class transformers.AlignConfig
< source >
( text_config = None vision_config = None projection_dim = 640 temperature_init_value = 1.0 initializer_range = 0.02 **kwargs )
Parameters
text_config (dict, optional) — Dictionary of configuration options used to initialize AlignTextConfig.
vision_config (dict, optional) — Dictionary of configuration options used to initialize AlignVisionConfig.
projection_dim (int, optional, defaults to 640) — Dimensionality of the text and vision projection layers.
temperature_init_value (float, optional, defaults to 1.0) — The initial value of the temperature parameter. Default is used as per the original ALIGN implementation.
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
kwargs (optional) — Dictionary of keyword arguments.
AlignConfig is the configuration class to store the configuration of a AlignModel. It is used to instantiate a ALIGN model according to the specified arguments, defining the text model and vision model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the ALIGN kakaobrain/align-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import AlignConfig, AlignModel
>>> # Initializing an AlignConfig with the kakaobrain/align-base style configuration
>>> configuration = AlignConfig()
>>> # Initializing an AlignModel (with random weights) from the kakaobrain/align-base style configuration
>>> model = AlignModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
>>> # We can also initialize an AlignConfig from an AlignTextConfig and an AlignVisionConfig
>>> from transformers import AlignTextConfig, AlignVisionConfig
>>> # Initializing the ALIGN text and vision configurations
>>> config_text = AlignTextConfig()
>>> config_vision = AlignVisionConfig()
>>> config = AlignConfig.from_text_vision_configs(config_text, config_vision)
from_text_vision_configs
< source >
( text_config: AlignTextConfig vision_config: AlignVisionConfig **kwargs ) → AlignConfig
An instance of a configuration object
Instantiate a AlignConfig (or a derived class) from align text model configuration and align vision model configuration.
AlignTextConfig
class transformers.AlignTextConfig
< source >
( vocab_size = 30522 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 type_vocab_size = 2 initializer_range = 0.02 layer_norm_eps = 1e-12 pad_token_id = 0 position_embedding_type = 'absolute' use_cache = True **kwargs )
Parameters
vocab_size (int, optional, defaults to 30522) — Vocabulary size of the Align Text model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling AlignTextModel.
hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) — The vocabulary size of the token_type_ids passed when calling AlignTextModel.
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.).
use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True.
pad_token_id (int, optional, defaults to 0) — Padding token id.
This is the configuration class to store the configuration of a AlignTextModel. It is used to instantiate a ALIGN text encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the text encoder of the ALIGN kakaobrain/align-base architecture. The default values here are copied from BERT.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import AlignTextConfig, AlignTextModel
>>> # Initializing an AlignTextConfig with the kakaobrain/align-base style configuration
>>> configuration = AlignTextConfig()
>>> # Initializing an AlignTextModel (with random weights) from the kakaobrain/align-base style configuration
>>> model = AlignTextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
AlignVisionConfig
class transformers.AlignVisionConfig
< source >
( num_channels: int = 3 image_size: int = 600 width_coefficient: float = 2.0 depth_coefficient: float = 3.1 depth_divisor: int = 8 kernel_sizes: typing.List[int] = [3, 3, 5, 3, 5, 5, 3] in_channels: typing.List[int] = [32, 16, 24, 40, 80, 112, 192] out_channels: typing.List[int] = [16, 24, 40, 80, 112, 192, 320] depthwise_padding: typing.List[int] = [] strides: typing.List[int] = [1, 2, 2, 2, 1, 2, 1] num_block_repeats: typing.List[int] = [1, 2, 2, 3, 3, 4, 1] expand_ratios: typing.List[int] = [1, 6, 6, 6, 6, 6, 6] squeeze_expansion_ratio: float = 0.25 hidden_act: str = 'swish' hidden_dim: int = 2560 pooling_type: str = 'mean' initializer_range: float = 0.02 batch_norm_eps: float = 0.001 batch_norm_momentum: float = 0.99 drop_connect_rate: float = 0.2 **kwargs )
Parameters
num_channels (int, optional, defaults to 3) — The number of input channels.
image_size (int, optional, defaults to 600) — The input image size.
width_coefficient (float, optional, defaults to 2.0) — Scaling coefficient for network width at each stage.
depth_coefficient (float, optional, defaults to 3.1) — Scaling coefficient for network depth at each stage.
depth_divisor (int, optional, defaults to 8) — A unit of network width.
kernel_sizes (List[int], optional, defaults to [3, 3, 5, 3, 5, 5, 3]) — List of kernel sizes to be used in each block.
in_channels (List[int], optional, defaults to [32, 16, 24, 40, 80, 112, 192]) — List of input channel sizes to be used in each block for convolutional layers.
out_channels (List[int], optional, defaults to [16, 24, 40, 80, 112, 192, 320]) — List of output channel sizes to be used in each block for convolutional layers.
depthwise_padding (List[int], optional, defaults to []) — List of block indices with square padding.
strides (List[int], optional, defaults to [1, 2, 2, 2, 1, 2, 1]) — List of stride sizes to be used in each block for convolutional layers.
num_block_repeats (List[int], optional, defaults to [1, 2, 2, 3, 3, 4, 1]) — List of the number of times each block is to be repeated.
expand_ratios (List[int], optional, defaults to [1, 6, 6, 6, 6, 6, 6]) — List of scaling coefficients for each block.
squeeze_expansion_ratio (float, optional, defaults to 0.25) — Squeeze expansion ratio.
hidden_act (str or function, optional, defaults to "swish") — The non-linear activation function (function or string) in each block. If string, "gelu", "relu", "selu", "gelu_new", "silu" and "mish" are supported.
hidden_dim (int, optional, defaults to 2560) — The hidden dimension of the layer before the classification head.
pooling_type (str or function, optional, defaults to "mean") — Type of final pooling to be applied before the dense classification head. Available options are ["mean", "max"]
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
batch_norm_eps (float, optional, defaults to 1e-3) — The epsilon used by the batch normalization layers.
batch_norm_momentum (float, optional, defaults to 0.99) — The momentum used by the batch normalization layers.
drop_connect_rate (float, optional, defaults to 0.2) — The drop rate for skip connections.
This is the configuration class to store the configuration of a AlignVisionModel. It is used to instantiate a ALIGN vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the vision encoder of the ALIGN kakaobrain/align-base architecture. The default values are copied from EfficientNet (efficientnet-b7)
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import AlignVisionConfig, AlignVisionModel
>>> # Initializing an AlignVisionConfig with the kakaobrain/align-base style configuration
>>> configuration = AlignVisionConfig()
>>> # Initializing an AlignVisionModel (with random weights) from the kakaobrain/align-base style configuration
>>> model = AlignVisionModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
AlignProcessor
class transformers.AlignProcessor
< source >
( image_processor tokenizer )
Parameters
image_processor (EfficientNetImageProcessor) — The image processor is a required input.
tokenizer ([BertTokenizer, BertTokenizerFast]) — The tokenizer is a required input.
Constructs an ALIGN processor which wraps EfficientNetImageProcessor and BertTokenizer/BertTokenizerFast into a single processor that inherits both the image processor and tokenizer functionalities. See the __call__() and decode() for more information.
This method forwards all its arguments to BertTokenizerFast’s batch_decode(). Please refer to the docstring of this method for more information.
This method forwards all its arguments to BertTokenizerFast’s decode(). Please refer to the docstring of this method for more information.
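A minimal sketch of preparing joint text and image inputs with the processor (the image URL matches the usage example above; network access and the Pillow and requests packages are assumed):
import requests
from PIL import Image
from transformers import AlignProcessor

processor = AlignProcessor.from_pretrained("kakaobrain/align-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Tokenize the text and preprocess the image in a single call.
inputs = processor(text=["an image of a cat", "an image of a dog"], images=image, return_tensors="pt")
# The batch contains both the tokenized text and the preprocessed pixel values.
print(inputs.keys())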
AlignModel
class transformers.AlignModel
< source >
( config: AlignConfig )
Parameters
config (AlignConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.LongTensor] = None pixel_values: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None return_loss: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
token_type_ids (torch.LongTensor of shape ({0}), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape ({0}, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See EfficientNetImageProcessor.call() for details.
return_loss (bool, optional) — Whether or not to return the contrastive loss.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
The AlignModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
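A minimal sketch of a full forward pass for image-text similarity, mirroring the usage example earlier on this page (network access is assumed for the checkpoint and the image):
>>> from PIL import Image
>>> import requests
>>> import torch
>>> from transformers import AutoProcessor, AlignModel
>>> model = AlignModel.from_pretrained("kakaobrain/align-base")
>>> processor = AutoProcessor.from_pretrained("kakaobrain/align-base")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> texts = ["a photo of a cat", "a photo of a dog"]
>>> inputs = processor(text=texts, images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> # image-text similarity scores, softmaxed over the candidate texts
>>> logits_per_image = outputs.logits_per_image
>>> probs = logits_per_image.softmax(dim=1)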
get_text_features
< source >
( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → text_features (torch.FloatTensor of shape (batch_size, output_dim)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
token_type_ids (torch.LongTensor of shape ({0}), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape ({0}, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
text_features (torch.FloatTensor of shape (batch_size, output_dim)
The text embeddings obtained by applying the projection layer to the pooled output of AlignTextModel.
The AlignModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
>>> from transformers import AutoTokenizer, AlignModel
>>> model = AlignModel.from_pretrained("kakaobrain/align-base")
>>> tokenizer = AutoTokenizer.from_pretrained("kakaobrain/align-base")
>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
>>> text_features = model.get_text_features(**inputs)
get_image_features
< source >
( pixel_values: typing.Optional[torch.FloatTensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → image_features (torch.FloatTensor of shape (batch_size, output_dim)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See EfficientNetImageProcessor.call() for details.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
image_features (torch.FloatTensor of shape (batch_size, output_dim)
The image embeddings obtained by applying the projection layer to the pooled output of AlignVisionModel.
The AlignModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, AlignModel
>>> model = AlignModel.from_pretrained("kakaobrain/align-base")
>>> processor = AutoProcessor.from_pretrained("kakaobrain/align-base")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(images=image, return_tensors="pt")
>>> image_features = model.get_image_features(**inputs)
AlignTextModel
class transformers.AlignTextModel
< source >
( config: AlignTextConfig add_pooling_layer: bool = True )
Parameters
config (AlignTextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The text model from ALIGN without any head or projection on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.align.configuration_align.AlignTextConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
The AlignTextModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
>>> from transformers import AutoTokenizer, AlignTextModel
>>> model = AlignTextModel.from_pretrained("kakaobrain/align-base")
>>> tokenizer = AutoTokenizer.from_pretrained("kakaobrain/align-base")
>>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output
AlignVisionModel
class transformers.AlignVisionModel
< source >
( config: AlignVisionConfig )
Parameters
config (AlignVisionConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The vision model from ALIGN without any head or projection on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( pixel_values: typing.Optional[torch.FloatTensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See EfficientNetImageProcessor.call() for details.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.align.configuration_align.AlignVisionConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The AlignVisionModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, AlignVisionModel
>>> model = AlignVisionModel.from_pretrained("kakaobrain/align-base")
>>> processor = AutoProcessor.from_pretrained("kakaobrain/align-base")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(images=image, return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output |
https://huggingface.co/docs/transformers/model_doc/altclip | AltCLIP
Overview
The AltCLIP model was proposed in AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu. AltCLIP (Altering the Language Encoder in CLIP) is a neural network trained on a variety of image-text and text-text pairs. By switching CLIP’s text encoder with a pretrained multilingual text encoder XLM-R, we could obtain very close performances with CLIP on almost all tasks, and extended original CLIP’s capabilities such as multilingual understanding.
The abstract from the paper is the following:
In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flickr30k-CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.
Usage
The usage of AltCLIP is very similar to that of CLIP; the difference from CLIP is the text encoder. Note that we use bidirectional attention instead of causal attention, and we take the [CLS] token in XLM-R to represent the text embedding.
AltCLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. AltCLIP uses a ViT-like Transformer to get visual features and a bidirectional language model to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similarity score.
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches, which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image. The authors also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. The CLIPImageProcessor can be used to resize (or rescale) and normalize images for the model.
The AltCLIPProcessor wraps a CLIPImageProcessor and a XLMRobertaTokenizer into a single instance to both encode the text and prepare the images. The following example shows how to get the image-text similarity scores using AltCLIPProcessor and AltCLIPModel.
>>> from PIL import Image
>>> import requests
>>> from transformers import AltCLIPModel, AltCLIPProcessor
>>> model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
>>> processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image
>>> probs = logits_per_image.softmax(dim=1)
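The logits above already include the model’s learned logit scale (temperature). If you only need the raw projected embeddings described earlier, here is a minimal sketch continuing the example above (not an official recipe, and omitting the logit scale):
>>> import torch
>>> with torch.no_grad():
...     text_embeds = model.get_text_features(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask)
...     image_embeds = model.get_image_features(pixel_values=inputs.pixel_values)
>>> text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)  # L2-normalize the projected text embeddings
>>> image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)  # L2-normalize the projected image embeddings
>>> similarity = image_embeds @ text_embeds.T  # cosine similarity: one row per image, one column per text prompt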
Tips:
This model is built on CLIPModel, so use it like the original CLIP.
This model was contributed by jongjyh.
AltCLIPConfig
class transformers.AltCLIPConfig
< source >
( text_config = None vision_config = None projection_dim = 768 logit_scale_init_value = 2.6592 **kwargs )
Parameters
text_config (dict, optional) — Dictionary of configuration options used to initialize AltCLIPTextConfig.
vision_config (dict, optional) — Dictionary of configuration options used to initialize AltCLIPVisionConfig.
projection_dim (int, optional, defaults to 768) — Dimensionality of text and vision projection layers.
logit_scale_init_value (float, optional, defaults to 2.6592) — The initial value of the logit_scale parameter. Default is used as per the original CLIP implementation.
kwargs (optional) — Dictionary of keyword arguments.
This is the configuration class to store the configuration of an AltCLIPModel. It is used to instantiate an AltCLIP model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the AltCLIP BAAI/AltCLIP architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import AltCLIPConfig, AltCLIPModel
>>> # Initializing an AltCLIPConfig with BAAI/AltCLIP style configuration
>>> configuration = AltCLIPConfig()
>>> # Initializing an AltCLIPModel (with random weights) from the BAAI/AltCLIP style configuration
>>> model = AltCLIPModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
>>> # We can also initialize an AltCLIPConfig from an AltCLIPTextConfig and an AltCLIPVisionConfig
>>> # Initializing AltCLIPText and AltCLIPVision model configurations
>>> config_text = AltCLIPTextConfig()
>>> config_vision = AltCLIPVisionConfig()
>>> config = AltCLIPConfig.from_text_vision_configs(config_text, config_vision)
from_text_vision_configs
< source >
( text_config: AltCLIPTextConfig vision_config: AltCLIPVisionConfig **kwargs ) → AltCLIPConfig
An instance of a configuration object
Instantiate an AltCLIPConfig (or a derived class) from an AltCLIP text model configuration and an AltCLIP vision model configuration.
AltCLIPTextConfig
class transformers.AltCLIPTextConfig
< source >
( vocab_size = 250002 hidden_size = 1024 num_hidden_layers = 24 num_attention_heads = 16 intermediate_size = 4096 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 514 type_vocab_size = 1 initializer_range = 0.02 initializer_factor = 0.02 layer_norm_eps = 1e-05 pad_token_id = 1 bos_token_id = 0 eos_token_id = 2 position_embedding_type = 'absolute' use_cache = True project_dim = 768 **kwargs )
Parameters
vocab_size (int, optional, defaults to 250002) — Vocabulary size of the AltCLIP model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling AltCLIPTextModel.
hidden_size (int, optional, defaults to 1024) — Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 24) — Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 4096) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 514) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 1) — The vocabulary size of the token_type_ids passed when calling AltCLIPTextModel
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.).
use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True.
project_dim (int, optional, defaults to 768) — The dimension of the teacher model before the mapping layer.
This is the configuration class to store the configuration of an AltCLIPTextModel. It is used to instantiate an AltCLIP text model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the AltCLIP BAAI/AltCLIP architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Examples:
>>> from transformers import AltCLIPTextModel, AltCLIPTextConfig
>>> # Initializing an AltCLIPTextConfig with BAAI/AltCLIP style configuration
>>> configuration = AltCLIPTextConfig()
>>> # Initializing an AltCLIPTextModel (with random weights) from the BAAI/AltCLIP style configuration
>>> model = AltCLIPTextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
AltCLIPVisionConfig
class transformers.AltCLIPVisionConfig
< source >
( hidden_size = 768 intermediate_size = 3072 projection_dim = 512 num_hidden_layers = 12 num_attention_heads = 12 num_channels = 3 image_size = 224 patch_size = 32 hidden_act = 'quick_gelu' layer_norm_eps = 1e-05 attention_dropout = 0.0 initializer_range = 0.02 initializer_factor = 1.0 **kwargs )
Parameters
hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
image_size (int, optional, defaults to 224) — The size (resolution) of each image.
patch_size (int, optional, defaults to 32) — The size (resolution) of each patch.
hidden_act (str or function, optional, defaults to "quick_gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported.
layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (float, optional, defaults to 1.0) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing).
This is the configuration class to store the configuration of an AltCLIPVisionModel. It is used to instantiate an AltCLIP vision model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the AltCLIP BAAI/AltCLIP architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import AltCLIPVisionConfig, AltCLIPVisionModel
>>> # Initializing an AltCLIPVisionConfig with BAAI/AltCLIP style configuration
>>> configuration = AltCLIPVisionConfig()
>>> # Initializing an AltCLIPVisionModel (with random weights) from the BAAI/AltCLIP style configuration
>>> model = AltCLIPVisionModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
AltCLIPProcessor
class transformers.AltCLIPProcessor
< source >
( image_processor = None tokenizer = None **kwargs )
Parameters
image_processor (CLIPImageProcessor) — The image processor is a required input.
tokenizer (XLMRobertaTokenizerFast) — The tokenizer is a required input.
Constructs an AltCLIP processor which wraps a CLIP image processor and an XLM-RoBERTa tokenizer into a single processor.
AltCLIPProcessor offers all the functionalities of CLIPImageProcessor and XLMRobertaTokenizerFast. See the __call__() and decode() for more information.
This method forwards all its arguments to XLMRobertaTokenizerFast’s batch_decode(). Please refer to the docstring of this method for more information.
This method forwards all its arguments to XLMRobertaTokenizerFast’s decode(). Please refer to the docstring of this method for more information.
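For example, a minimal sketch (using the BAAI/AltCLIP checkpoint shown elsewhere on this page) of round-tripping text through the processor:
>>> from transformers import AltCLIPProcessor
>>> processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")
>>> encoding = processor(text=["a photo of a cat"], padding=True, return_tensors="pt")
>>> decoded = processor.batch_decode(encoding["input_ids"], skip_special_tokens=True)  # recovers the input text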
AltCLIPModel
class transformers.AltCLIPModel
< source >
( config: AltCLIPConfig )
forward
< source >
( input_ids: typing.Optional[torch.LongTensor] = None pixel_values: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None token_type_ids: typing.Optional[torch.Tensor] = None return_loss: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.altclip.modeling_altclip.AltCLIPOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details.
return_loss (bool, optional) — Whether or not to return the contrastive loss.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.altclip.modeling_altclip.AltCLIPOutput or tuple(torch.FloatTensor)
A transformers.models.altclip.modeling_altclip.AltCLIPOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.altclip.configuration_altclip.AltCLIPConfig'>) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores.
logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores.
text_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of AltCLIPTextModel.
image_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of AltCLIPVisionModel.
text_model_output (BaseModelOutputWithPooling) — The output of the AltCLIPTextModel.
vision_model_output (BaseModelOutputWithPooling) — The output of the AltCLIPVisionModel.
The AltCLIPModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, AltCLIPModel
>>> model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
>>> processor = AutoProcessor.from_pretrained("BAAI/AltCLIP")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(
... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
... )
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image
>>> probs = logits_per_image.softmax(dim=1)
get_text_features
< source >
( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None token_type_ids = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → text_features (torch.FloatTensor of shape (batch_size, output_dim)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
text_features (torch.FloatTensor of shape (batch_size, output_dim))
The text embeddings obtained by applying the projection layer to the pooled output of AltCLIPTextModel.
The AltCLIPModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
>>> from transformers import AutoProcessor, AltCLIPModel
>>> model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
>>> processor = AutoProcessor.from_pretrained("BAAI/AltCLIP")
>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
>>> text_features = model.get_text_features(**inputs)
get_image_features
< source >
( pixel_values: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → image_features (torch.FloatTensor of shape (batch_size, output_dim)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
image_features (torch.FloatTensor of shape (batch_size, output_dim))
The image embeddings obtained by applying the projection layer to the pooled output of AltCLIPVisionModel.
The AltCLIPModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, AltCLIPModel
>>> model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
>>> processor = AutoProcessor.from_pretrained("BAAI/AltCLIP")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(images=image, return_tensors="pt")
>>> image_features = model.get_image_features(**inputs)
AltCLIPTextModel
class transformers.AltCLIPTextModel
< source >
( config )
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndProjection or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndProjection or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndProjection or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.altclip.configuration_altclip.AltCLIPTextConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
projection_state (tuple(torch.FloatTensor), returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor of shape (batch_size,config.project_dim).
Text embeddings before the projection layer, used to mimic the last hidden state of the teacher encoder.
The AltCLIPTextModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
>>> from transformers import AutoProcessor, AltCLIPTextModel
>>> model = AltCLIPTextModel.from_pretrained("BAAI/AltCLIP")
>>> processor = AutoProcessor.from_pretrained("BAAI/AltCLIP")
>>> texts = ["it's a cat", "it's a dog"]
>>> inputs = processor(text=texts, padding=True, return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output
AltCLIPVisionModel
class transformers.AltCLIPVisionModel
< source >
( config: AltCLIPVisionConfig )
forward
< source >
( pixel_values: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.altclip.configuration_altclip.AltCLIPVisionConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The AltCLIPVisionModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, AltCLIPVisionModel
>>> model = AltCLIPVisionModel.from_pretrained("BAAI/AltCLIP")
>>> processor = AutoProcessor.from_pretrained("BAAI/AltCLIP")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(images=image, return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output |
https://huggingface.co/docs/transformers/model_doc/bart | BART
DISCLAIMER: If you see something strange, file a Github Issue and assign @patrickvonplaten
Overview
The Bart model was proposed in BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019.
According to the abstract,
Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).
The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.
BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE.
Tips:
BART is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.
Sequence-to-sequence model with an encoder and a decoder. The encoder is fed a corrupted version of the tokens, and the decoder is fed the original tokens (but has a mask to hide future words, like a regular transformers decoder). A composition of the following transformations is applied to the pretraining task for the encoder:
mask random tokens (like in BERT)
delete random tokens
mask a span of k tokens with a single mask token (a span of 0 tokens is an insertion of a mask token)
permute sentences
rotate the document to make it start at a specific token
This model was contributed by sshleifer. The Authors’ code can be found here.
Examples
Examples and scripts for fine-tuning BART and other models for sequence to sequence tasks can be found in examples/pytorch/summarization/.
An example of how to train BartForConditionalGeneration with a Hugging Face datasets object can be found in this forum discussion.
Distilled checkpoints are described in this paper.
Implementation Notes
Bart doesn’t use token_type_ids for sequence classification. Use BartTokenizer or encode() to get the proper splitting.
The forward pass of BartModel will create the decoder_input_ids if they are not passed. This is different from some other modeling APIs; see the short sketch after these notes. A typical use case of this feature is mask filling.
Model predictions are intended to be identical to the original implementation when forced_bos_token_id=0. This only works, however, if the string you pass to fairseq.encode starts with a space.
generate() should be used for conditional generation tasks like summarization, see the example in that docstrings.
Models that load the facebook/bart-large-cnn weights will not have a mask_token_id, or be able to perform mask-filling tasks.
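As a minimal sketch of the decoder_input_ids note above (not an official recipe), calling BartModel with only the encoder inputs works because the decoder inputs are derived internally:
from transformers import BartTokenizer, BartModel

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartModel.from_pretrained("facebook/bart-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)  # decoder_input_ids are not passed; they are created from input_ids internally
last_hidden_state = outputs.last_hidden_state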
Mask Filling
The facebook/bart-base and facebook/bart-large checkpoints can be used to fill multi-token masks.
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)
tok = BartTokenizer.from_pretrained("facebook/bart-large")
example_english_phrase = "UN Chief Says There Is No <mask> in Syria"
batch = tok(example_english_phrase, return_tensors="pt")
generated_ids = model.generate(batch["input_ids"])
assert tok.batch_decode(generated_ids, skip_special_tokens=True) == [
"UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria"
]
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BART. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Summarization
A blog post on Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker.
A notebook on how to finetune BART for summarization with fastai using blurr. 🌎
A notebook on how to finetune BART for summarization in two languages with Trainer class. 🌎
BartForConditionalGeneration is supported by this example script and notebook.
TFBartForConditionalGeneration is supported by this example script and notebook.
FlaxBartForConditionalGeneration is supported by this example script.
Summarization chapter of the 🤗 Hugging Face course.
Summarization task guide
Fill-Mask
BartForConditionalGeneration is supported by this example script and notebook.
TFBartForConditionalGeneration is supported by this example script and notebook.
FlaxBartForConditionalGeneration is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling task guide
Translation
A notebook on how to finetune mBART using Seq2SeqTrainer for Hindi to English translation. 🌎
BartForConditionalGeneration is supported by this example script and notebook.
TFBartForConditionalGeneration is supported by this example script and notebook.
Translation task guide
See also:
Text classification task guide
Question answering task guide
Causal language modeling task guide
BartConfig
class transformers.BartConfig
< source >
( vocab_size = 50265 max_position_embeddings = 1024 encoder_layers = 12 encoder_ffn_dim = 4096 encoder_attention_heads = 16 decoder_layers = 12 decoder_ffn_dim = 4096 decoder_attention_heads = 16 encoder_layerdrop = 0.0 decoder_layerdrop = 0.0 activation_function = 'gelu' d_model = 1024 dropout = 0.1 attention_dropout = 0.0 activation_dropout = 0.0 init_std = 0.02 classifier_dropout = 0.0 scale_embedding = False use_cache = True num_labels = 3 pad_token_id = 1 bos_token_id = 0 eos_token_id = 2 is_encoder_decoder = True decoder_start_token_id = 2 forced_eos_token_id = 2 **kwargs )
Parameters
vocab_size (int, optional, defaults to 50265) — Vocabulary size of the BART model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling BartModel or TFBartModel.
d_model (int, optional, defaults to 1024) — Dimensionality of the layers and the pooler layer.
encoder_layers (int, optional, defaults to 12) — Number of encoder layers.
decoder_layers (int, optional, defaults to 12) — Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 4096) — Dimensionality of the “intermediate” (often named feed-forward) layer in decoder.
encoder_ffn_dim (int, optional, defaults to 4096) — Dimensionality of the “intermediate” (often named feed-forward) layer in encoder.
activation_function (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer.
classifier_dropout (float, optional, defaults to 0.0) — The dropout ratio for classifier.
max_position_embeddings (int, optional, defaults to 1024) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
init_std (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more details.
decoder_layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more details.
scale_embedding (bool, optional, defaults to False) — Scale embeddings by dividing by sqrt(d_model).
use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models).
num_labels (int, optional, defaults to 3) — The number of labels to use in BartForSequenceClassification.
forced_eos_token_id (int, optional, defaults to 2) — The id of the token to force as the last generated token when max_length is reached. Usually set to eos_token_id.
This is the configuration class to store the configuration of a BartModel. It is used to instantiate a BART model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BART facebook/bart-large architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import BartConfig, BartModel
>>> # Initializing a BART facebook/bart-large style configuration
>>> configuration = BartConfig()
>>> # Initializing a model (with random weights) from the facebook/bart-large style configuration
>>> model = BartModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
BartTokenizer
class transformers.BartTokenizer
< source >
( vocab_file merges_file errors = 'replace' bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' add_prefix_space = False **kwargs )
Parameters
vocab_file (str) — Path to the vocabulary file.
merges_file (str) — Path to the merges file.
errors (str, optional, defaults to "replace") — Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") — The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) — Whether or not to add an initial space to the input. This allows treating the leading word just like any other word. (The BART tokenizer detects the beginning of words by the preceding space.)
Constructs a BART tokenizer, which is similar to the RoBERTa tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will be encoded differently depending on whether it is at the beginning of the sentence (without space) or not:
>>> from transformers import BartTokenizer
>>> tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
>>> tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]
>>> tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
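For instance, reusing the IDs shown above, instantiating the tokenizer with add_prefix_space=True makes "Hello world" encode the same way as " Hello world":
>>> tokenizer = BartTokenizer.from_pretrained("facebook/bart-base", add_prefix_space=True)
>>> tokenizer("Hello world")["input_ids"]
[0, 20920, 232, 2]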
When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A BART sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
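A short sketch of these formats in terms of token IDs, assuming the facebook/bart-base special token IDs (bos_token_id = 0, eos_token_id = 2, as in the BartConfig signature above):
>>> from transformers import BartTokenizer
>>> tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
>>> tokenizer.build_inputs_with_special_tokens([100, 200])
[0, 100, 200, 2]
>>> tokenizer.build_inputs_with_special_tokens([100], [200])
[0, 100, 2, 2, 200, 2]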
Converts a sequence of tokens (string) into a single string.
create_token_type_ids_from_sequences
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. BART does not make use of token type ids, therefore a list of zeros is returned.
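For example, a minimal sketch (the returned list of zeros has the same length as the sequence built with special tokens):
>>> from transformers import BartTokenizer
>>> tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
>>> tokenizer.create_token_type_ids_from_sequences([100, 200])
[0, 0, 0, 0]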
get_special_tokens_mask
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model.
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.
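For example, reusing the "Hello world" IDs shown earlier (a minimal sketch; 0 and 2 are the special <s> and </s> tokens):
>>> from transformers import BartTokenizer
>>> tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
>>> tokenizer.get_special_tokens_mask([0, 31414, 232, 2], already_has_special_tokens=True)
[1, 0, 0, 1]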
BartTokenizerFast
class transformers.BartTokenizerFast
< source >
( vocab_file = None merges_file = None tokenizer_file = None errors = 'replace' bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' add_prefix_space = False trim_offsets = True **kwargs )
Parameters
vocab_file (str) — Path to the vocabulary file.
merges_file (str) — Path to the merges file.
errors (str, optional, defaults to "replace") — Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") — The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) — Whether or not to add an initial space to the input. This allows treating the leading word just like any other word. (The BART tokenizer detects the beginning of words by the preceding space.)
trim_offsets (bool, optional, defaults to True) — Whether the post processing step should trim offsets to avoid including whitespaces.
Construct a “fast” BART tokenizer (backed by HuggingFace’s tokenizers library), derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will be encoded differently depending on whether it is at the beginning of the sentence (without space) or not:
>>> from transformers import BartTokenizerFast
>>> tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
>>> tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]
>>> tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
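For instance, a minimal sketch of passing pre-tokenized input to the fast tokenizer (add_prefix_space=True is required, as noted above):
>>> from transformers import BartTokenizerFast

>>> tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base", add_prefix_space=True)
>>> tokenizer(["Hello", "world"], is_split_into_words=True)["input_ids"]
[0, 20920, 232, 2]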
create_token_type_ids_from_sequences
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. BART does not make use of token type ids, therefore a list of zeros is returned.
BartModel
class transformers.BartModel
< source >
( config: BartConfig )
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare BART Model outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None decoder_head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
Bart uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
If you want to change padding behavior, you should read modeling_bart._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in the paper for more information on the default strategy.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds.
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The BartModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, BartModel
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
>>> model = BartModel.from_pretrained("facebook/bart-base")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
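Because the encoder output can be reused (for example, when scoring several decoder inputs against the same source), here is a sketch of computing it once with the generic get_encoder() helper and passing it back in via encoder_outputs:
>>> # reuse the model, tokenizer, and inputs from the example above
>>> encoder_outputs = model.get_encoder()(**inputs)
>>> outputs = model(
...     input_ids=inputs["input_ids"],
...     attention_mask=inputs["attention_mask"],
...     encoder_outputs=encoder_outputs,
... )
>>> last_hidden_states = outputs.last_hidden_state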
BartForConditionalGeneration
class transformers.BartForConditionalGeneration
< source >
( config: BartConfig )
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The BART Model with a language modeling head. Can be used for summarization. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None decoder_head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
Bart uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
If you want to change padding behavior, you should read modeling_bart._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in the paper for more information on the default strategy.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds.
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The BartForConditionalGeneration forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Summarization example:
>>> from transformers import AutoTokenizer, BartForConditionalGeneration
>>> model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
>>> ARTICLE_TO_SUMMARIZE = (
... "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
... "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were "
... "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
... )
>>> inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="pt")
>>> # Generate Summary
>>> summary_ids = model.generate(inputs["input_ids"], num_beams=2, min_length=0, max_length=20)
>>> tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
'PG&E scheduled the blackouts in response to forecasts for high winds amid dry conditions'
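When labels is passed (see the labels argument above), the model returns the language modeling loss directly, creating decoder_input_ids internally if none are given; a minimal sketch reusing the model, tokenizer, and inputs from the summarization example:
>>> target = "PG&E scheduled the blackouts in response to forecasts for high winds."
>>> labels = tokenizer(target, return_tensors="pt")["input_ids"]
>>> loss = model(input_ids=inputs["input_ids"], labels=labels).loss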
Mask filling example:
>>> from transformers import AutoTokenizer, BartForConditionalGeneration
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
>>> model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
>>> TXT = "My friends are <mask> but they eat too many carbs."
>>> input_ids = tokenizer([TXT], return_tensors="pt")["input_ids"]
>>> logits = model(input_ids).logits
>>> masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
>>> probs = logits[0, masked_index].softmax(dim=0)
>>> values, predictions = probs.topk(5)
>>> tokenizer.decode(predictions).split()
['not', 'good', 'healthy', 'great', 'very']
BartForSequenceClassification
class transformers.BartForSequenceClassification
< source >
( config: BartConfig **kwargs )
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bart model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None decoder_head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
Bart uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
If you want to change padding behavior, you should read modeling_bart._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in the paper for more information on the default strategy.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds.
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
A transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The BartForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example of single-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, BartForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("valhalla/bart-large-sst2")
>>> model = BartForSequenceClassification.from_pretrained("valhalla/bart-large-sst2")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax().item()
>>> model.config.id2label[predicted_class_id]
'POSITIVE'
>>> # to train on num_labels classes, reload the model passing num_labels
>>> num_labels = len(model.config.id2label)
>>> model = BartForSequenceClassification.from_pretrained("valhalla/bart-large-sst2", num_labels=num_labels)
>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
0.0
Example of multi-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, BartForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("valhalla/bart-large-sst2")
>>> model = BartForSequenceClassification.from_pretrained("valhalla/bart-large-sst2", problem_type="multi_label_classification")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
>>> # to train on num_labels classes, reload the model passing num_labels and problem_type
>>> num_labels = len(model.config.id2label)
>>> model = BartForSequenceClassification.from_pretrained(
... "valhalla/bart-large-sst2", num_labels=num_labels, problem_type="multi_label_classification"
... )
>>> labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss
BartForQuestionAnswering
class transformers.BartForQuestionAnswering
< source >
( config )
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
BART Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: Tensor = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None decoder_head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
Bart uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
If you want to change padding behavior, you should read modeling_bart._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in the paper for more information on the default strategy.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds.
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
A transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The BartForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, BartForQuestionAnswering
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("valhalla/bart-large-finetuned-squadv1")
>>> model = BartForQuestionAnswering.from_pretrained("valhalla/bart-large-finetuned-squadv1")
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
' nice puppet'
>>> # compute the loss for a target span ("nice puppet")
>>> target_start_index = torch.tensor([14])
>>> target_end_index = torch.tensor([15])
>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = outputs.loss
>>> round(loss.item(), 2)
0.59
BartForCausalLM
class transformers.BartForCausalLM
< source >
( config )
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
BART decoder with a language modeling head on top (a linear layer with weights tied to the input embeddings).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.FloatTensor] = None encoder_attention_mask: typing.Optional[torch.FloatTensor] = None head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional tensors are only required when the model is used as a decoder in a Sequence to Sequence model.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
Example:
>>> from transformers import AutoTokenizer, BartForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
>>> model = BartForCausalLM.from_pretrained("facebook/bart-base", add_cross_attention=False)
>>> assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> expected_shape = [1, inputs.input_ids.shape[-1], model.config.vocab_size]
>>> list(logits.shape) == expected_shape
True
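The past_key_values argument documented above can speed up autoregressive use of this head. The following is a minimal sketch, not part of the original example, that reuses the tokenizer, model, and inputs defined above; the names first_pass, next_token, and second_pass are illustrative only.
>>> import torch
>>> with torch.no_grad():
...     first_pass = model(**inputs, use_cache=True)
>>> next_token = first_pass.logits[:, -1, :].argmax(dim=-1, keepdim=True)
>>> with torch.no_grad():
...     second_pass = model(
...         input_ids=next_token,  # only the newly generated token is fed in
...         past_key_values=first_pass.past_key_values,  # cached keys/values supply the earlier context
...         use_cache=True,
...     )
>>> list(second_pass.logits.shape) == [1, 1, model.config.vocab_size]
True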
TFBartModel
class transformers.TFBartModel
< source >
( *args **kwargs )
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare BART Model outputting raw hidden-states without any specific head on top. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
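As a concrete illustration of the three formats listed above, the following minimal sketch (not part of the original documentation) feeds the same encoded sentence to TFBartModel each way; BART does not use token_type_ids, so attention_mask is used as the second input here.
>>> from transformers import AutoTokenizer, TFBartModel
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
>>> model = TFBartModel.from_pretrained("facebook/bart-large")
>>> encoded = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> # 1. keyword arguments, as with PyTorch models
>>> out_kwargs = model(input_ids=encoded["input_ids"], attention_mask=encoded["attention_mask"])
>>> # 2. a list in the first positional argument, in the order given in the docstring
>>> out_list = model([encoded["input_ids"], encoded["attention_mask"]])
>>> # 3. a dictionary keyed by the input names
>>> out_dict = model({"input_ids": encoded["input_ids"], "attention_mask": encoded["attention_mask"]})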
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None decoder_input_ids: np.ndarray | tf.Tensor | None = None decoder_attention_mask: np.ndarray | tf.Tensor | None = None decoder_position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None decoder_head_mask: np.ndarray | tf.Tensor | None = None cross_attn_head_mask: np.ndarray | tf.Tensor | None = None encoder_outputs: Optional[Union[Tuple, TFBaseModelOutput]] = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None inputs_embeds: np.ndarray | tf.Tensor | None = None decoder_inputs_embeds: np.ndarray | tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False **kwargs ) → transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Bart uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper.
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) — Will be created by default and ignores pad tokens. It is not recommended to set this for most use cases.
decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor, optional) — Sequence of hidden-states at the output of the last layer of the encoder, of shape (batch_size, sequence_length, hidden_size). Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) — contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). Set to False during training, True during generation
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
A transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFBartModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, TFBartModel
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
>>> model = TFBartModel.from_pretrained("facebook/bart-large")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)
>>> last_hidden_states = outputs.last_hidden_state
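As described in the return section above, the seq2seq output also carries the encoder states; a small follow-up sketch (not part of the original example):
>>> # Encoder hidden states are exposed alongside the decoder output, with shape (batch_size, sequence_length, hidden_size)
>>> encoder_states = outputs.encoder_last_hidden_state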
TFBartForConditionalGeneration
class transformers.TFBartForConditionalGeneration
< source >
( *args **kwargs )
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The BART Model with a language modeling head. Can be used for summarization. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None decoder_input_ids: np.ndarray | tf.Tensor | None = None decoder_attention_mask: np.ndarray | tf.Tensor | None = None decoder_position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None decoder_head_mask: np.ndarray | tf.Tensor | None = None cross_attn_head_mask: np.ndarray | tf.Tensor | None = None encoder_outputs: Optional[TFBaseModelOutput] = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None inputs_embeds: np.ndarray | tf.Tensor | None = None decoder_inputs_embeds: np.ndarray | tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Bart uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper.
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) — Will be created by default and ignores pad tokens. It is not recommended to set this for most use cases.
decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor, optional) — Sequence of hidden-states at the output of the last layer of the encoder, of shape (batch_size, sequence_length, hidden_size). Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) — contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). Set to False during training, True during generation
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
A transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFBartForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Summarization example:
>>> from transformers import AutoTokenizer, TFBartForConditionalGeneration
>>> model = TFBartForConditionalGeneration.from_pretrained("facebook/bart-large")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
>>> ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="tf")
>>> # Generate the summary
>>> summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=5)
>>> print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False))
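Because labels can be passed to compute the language modeling loss (see the labels parameter above), here is a hedged sketch of a single training-style forward pass; it reuses the model, tokenizer, and inputs from the summarization example, and the target sentence is made up purely for illustration.
>>> # Hypothetical target summary used only to illustrate the labels argument
>>> labels = tokenizer(["My friends eat too many carbs."], return_tensors="tf").input_ids
>>> outputs = model(inputs["input_ids"], labels=labels)
>>> loss = outputs.loss  # per-token loss of shape (n,), as described in the return section above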
Mask filling example:
>>> from transformers import AutoTokenizer, TFBartForConditionalGeneration
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
>>> TXT = "My friends are <mask> but they eat too many carbs."
>>> model = TFBartForConditionalGeneration.from_pretrained("facebook/bart-large")
>>> input_ids = tokenizer([TXT], return_tensors="tf")["input_ids"]
>>> logits = model(input_ids).logits
>>> probs = tf.nn.softmax(logits[0])
>>>
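One possible follow-up, not part of the original example, locates the <mask> position and decodes the highest-probability candidate tokens; masked_index and top_k are illustrative names.
>>> masked_index = int(tf.where(input_ids[0] == tokenizer.mask_token_id)[0, 0])
>>> top_k = tf.math.top_k(probs[masked_index], k=5)
>>> [tokenizer.decode([token_id]) for token_id in top_k.indices.numpy().tolist()]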
TFBartForSequenceClassification
class transformers.TFBartForSequenceClassification
< source >
( *args **kwargs )
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bart model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None decoder_input_ids: np.ndarray | tf.Tensor | None = None decoder_attention_mask: np.ndarray | tf.Tensor | None = None decoder_position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None decoder_head_mask: np.ndarray | tf.Tensor | None = None cross_attn_head_mask: np.ndarray | tf.Tensor | None = None encoder_outputs: Optional[TFBaseModelOutput] = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None inputs_embeds: np.ndarray | tf.Tensor | None = None decoder_inputs_embeds: np.ndarray | tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSeq2SeqSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Bart uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper.
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) — Will be created by default and ignores pad tokens. It is not recommended to set this for most use cases.
decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor, optional) — Sequence of hidden-states at the output of the last layer of the encoder, of shape (batch_size, sequence_length, hidden_size). Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) — contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). Set to False during training, True during generation
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
A transformers.modeling_tf_outputs.TFSeq2SeqSequenceClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFBartForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
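The original documentation includes no usage example for this class; the following is a minimal sketch under the assumption that facebook/bart-large serves only as a convenient base checkpoint, so the classification head is freshly initialized and the scores are meaningless until the model is fine-tuned.
>>> from transformers import AutoTokenizer, TFBartForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
>>> model = TFBartForSequenceClassification.from_pretrained("facebook/bart-large", num_labels=2)
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> logits = model(**inputs).logits  # shape (batch_size, num_labels); untrained head, illustrative only
>>> logits.shape.as_list()
[1, 2]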
FlaxBartModel
class transformers.FlaxBartModel
< source >
( config: BartConfig input_shape: typing.Tuple[int] = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs )
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
The bare Bart Model transformer outputting raw hidden-states without any specific head on top. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( input_ids: Array attention_mask: typing.Optional[jax.Array] = None decoder_input_ids: typing.Optional[jax.Array] = None decoder_attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None decoder_position_ids: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
If you want to change padding behavior, you should modify to your needs. See diagram 1 in the paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxBartPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxBartModel
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
>>> model = FlaxBartModel.from_pretrained("facebook/bart-base")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
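The class description above lists JIT compilation among the supported JAX features; here is a hedged sketch, not part of the original docs, that jit-compiles a forward pass of the model loaded above (the parameters are simply closed over for brevity).
>>> import jax
>>> @jax.jit
... def forward(input_ids, attention_mask):
...     # model parameters are captured from the enclosing scope; this returns the same array as the eager call above
...     return model(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
>>> jit_hidden_states = forward(inputs.input_ids, inputs.attention_mask)
>>> jit_hidden_states.shape == last_hidden_states.shape
True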
encode
< source >
( input_ids: Array attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Example:
>>> from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
>>> model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
>>> text = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer(text, max_length=1024, return_tensors="jax")
>>> encoder_outputs = model.encode(**inputs)
decode
< source >
( decoder_input_ids encoder_outputs encoder_attention_mask: typing.Optional[jax.Array] = None decoder_attention_mask: typing.Optional[jax.Array] = None decoder_position_ids: typing.Optional[jax.Array] = None past_key_values: dict = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) — Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray))) — Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state, of shape (batch_size, sequence_length, hidden_size), is a sequence of hidden-states at the output of the last layer of the encoder, used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
If you want to change padding behavior, you should modify to your needs. See diagram 1 in the paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
Example:
>>> import jax.numpy as jnp
>>> from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
>>> model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
>>> text = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer(text, max_length=1024, return_tensors="jax")
>>> encoder_outputs = model.encode(**inputs)
>>> decoder_start_token_id = model.config.decoder_start_token_id
>>> decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
>>> outputs = model.decode(decoder_input_ids, encoder_outputs)
>>> last_decoder_hidden_states = outputs.last_hidden_state
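Continuing the example above, the past_key_values cache used for fast auto-regressive decoding is typically pre-allocated with the model's init_cache helper and then threaded through successive decode calls. The following is only a minimal sketch of a single cached step; the chosen max_length, the (batch_size, max_length) decoder attention mask, and the explicit decoder_position_ids are illustrative assumptions rather than required values:
>>> # hedged sketch: pre-allocate the cache and run one cached decoding step
>>> max_length = 64  # illustrative upper bound on the generated length
>>> batch_size = decoder_input_ids.shape[0]
>>> past_key_values = model.init_cache(batch_size, max_length, encoder_outputs)
>>> decoder_attention_mask = jnp.ones((batch_size, max_length), dtype="i4")
>>> decoder_position_ids = jnp.broadcast_to(
...     jnp.arange(decoder_input_ids.shape[-1], dtype="i4")[None, :], decoder_input_ids.shape
... )
>>> outputs = model.decode(
...     decoder_input_ids,
...     encoder_outputs,
...     decoder_attention_mask=decoder_attention_mask,
...     decoder_position_ids=decoder_position_ids,
...     past_key_values=past_key_values,
... )
>>> past_key_values = outputs.past_key_values  # updated cache to reuse at the next step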
FlaxBartForConditionalGeneration
class transformers.FlaxBartForConditionalGeneration
( config: BartConfig input_shape: typing.Tuple[int] = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs )
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
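As a hedged sketch of the dtype options above (the checkpoint name is only illustrative), the computation dtype is set at load time and the parameters can optionally be cast with the to_bf16 helper:
>>> import jax.numpy as jnp
>>> from transformers import FlaxBartForConditionalGeneration
>>> # run the computation in bfloat16 (e.g. on TPU); parameters are untouched unless cast explicitly
>>> model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn", dtype=jnp.bfloat16)
>>> model.params = model.to_bf16(model.params)  # optionally cast the parameters themselves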
The BART Model with a language modeling head. Can be used for summarization. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
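As a hedged illustration of the JIT support listed above (the wrapper function and checkpoint are illustrative, not part of the API), a forward pass can be compiled by closing over the model and passing the parameters explicitly:
>>> import jax
>>> from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
>>> model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
>>> inputs = tokenizer("My friends are cool but they eat too many carbs.", return_tensors="jax")
>>> @jax.jit
... def forward(input_ids, attention_mask, params):
...     # the model shifts input_ids right internally when decoder_input_ids is not given
...     return model(input_ids=input_ids, attention_mask=attention_mask, params=params).logits
>>> logits = forward(inputs["input_ids"], inputs["attention_mask"], model.params)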
__call__
( input_ids: Array attention_mask: typing.Optional[jax.Array] = None decoder_input_ids: typing.Optional[jax.Array] = None decoder_attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None decoder_position_ids: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
If you want to change the padding behavior, you should modify it to your needs. See diagram 1 in the paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxBartPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Summarization example:
>>> from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
>>> model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
>>> ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="np")
>>> # Generate Summary
>>> summary_ids = model.generate(inputs["input_ids"]).sequences
>>> print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False))
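Generation options such as beam search are typically requested through keyword arguments to generate(); the exact argument set depends on the installed version, so the following is only a hedged sketch with illustrative values:
>>> summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=60).sequences
>>> print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False))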
Mask filling example:
>>> import jax
>>> from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
>>> model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
>>> TXT = "My friends are <mask> but they eat too many carbs."
>>> input_ids = tokenizer([TXT], return_tensors="jax")["input_ids"]
>>> logits = model(input_ids).logits
>>> masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero()[0].item()
>>> probs = jax.nn.softmax(logits[0, masked_index], axis=0)
>>> values, predictions = jax.lax.top_k(probs, k=1)
>>> tokenizer.decode(predictions).split()
encode
( input_ids: Array attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Example:
>>> from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
>>> model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
>>> text = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer(text, max_length=1024, return_tensors="jax")
>>> encoder_outputs = model.encode(**inputs)
decode
( decoder_input_ids encoder_outputs encoder_attention_mask: typing.Optional[jax.Array] = None decoder_attention_mask: typing.Optional[jax.Array] = None decoder_position_ids: typing.Optional[jax.Array] = None past_key_values: dict = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) — Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray))) — Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state, of shape (batch_size, sequence_length, hidden_size), is a sequence of hidden-states at the output of the last layer of the encoder and is used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
If you want to change the padding behavior, you should modify it to your needs. See diagram 1 in the paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
Example:
>>> import jax.numpy as jnp
>>> from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
>>> model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
>>> text = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer(text, max_length=1024, return_tensors="jax")
>>> encoder_outputs = model.encode(**inputs)
>>> decoder_start_token_id = model.config.decoder_start_token_id
>>> decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
>>> outputs = model.decode(decoder_input_ids, encoder_outputs)
>>> logits = outputs.logits
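As a hedged follow-up, these logits can drive one greedy decoding step by appending the argmax token and calling decode again (shown here without a cache; variable names continue the example above):
>>> next_token = jnp.argmax(logits[:, -1, :], axis=-1)
>>> decoder_input_ids = jnp.concatenate([decoder_input_ids, next_token[:, None]], axis=-1)
>>> outputs = model.decode(decoder_input_ids, encoder_outputs)  # recomputes the whole prefix without a cache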
FlaxBartForSequenceClassification
class transformers.FlaxBartForSequenceClassification
( config: BartConfig input_shape: typing.Tuple[int] = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs )
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Bart model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
( input_ids: Array attention_mask: typing.Optional[jax.Array] = None decoder_input_ids: typing.Optional[jax.Array] = None decoder_attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None decoder_position_ids: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
If you want to change the padding behavior, you should modify it to your needs. See diagram 1 in the paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxBartPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxBartForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
>>> model = FlaxBartForSequenceClassification.from_pretrained("facebook/bart-base")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
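A short, hedged follow-up showing how these logits map to a label via the configuration's id2label table (with an untuned classification head the prediction is not meaningful; the snippet only illustrates the shapes):
>>> import jax.numpy as jnp
>>> predicted_class_id = int(jnp.argmax(logits, axis=-1)[0])
>>> model.config.id2label[predicted_class_id]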
encode
( input_ids: Array attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Example:
>>> from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
>>> model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
>>> text = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer(text, max_length=1024, return_tensors="jax")
>>> encoder_outputs = model.encode(**inputs)
decode
( decoder_input_ids encoder_outputs encoder_attention_mask: typing.Optional[jax.Array] = None decoder_attention_mask: typing.Optional[jax.Array] = None decoder_position_ids: typing.Optional[jax.Array] = None past_key_values: dict = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) — Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray))) — Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state, of shape (batch_size, sequence_length, hidden_size), is a sequence of hidden-states at the output of the last layer of the encoder and is used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
If you want to change the padding behavior, you should modify it to your needs. See diagram 1 in the paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
Example:
>>> import jax.numpy as jnp
>>> from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
>>> model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
>>> text = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer(text, max_length=1024, return_tensors="jax")
>>> encoder_outputs = model.encode(**inputs)
>>> decoder_start_token_id = model.config.decoder_start_token_id
>>> decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
>>> outputs = model.decode(decoder_input_ids, encoder_outputs)
>>> last_decoder_hidden_states = outputs.last_hidden_state
FlaxBartForQuestionAnswering
class transformers.FlaxBartForQuestionAnswering
( config: BartConfig input_shape: typing.Tuple[int] = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs )
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
BART Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
( input_ids: Array attention_mask: typing.Optional[jax.Array] = None decoder_input_ids: typing.Optional[jax.Array] = None decoder_attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None decoder_position_ids: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
If you want to change the padding behavior, you should modify it to your needs. See diagram 1 in the paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxBartPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxBartForQuestionAnswering
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
>>> model = FlaxBartForQuestionAnswering.from_pretrained("facebook/bart-base")
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors="jax")
>>> outputs = model(**inputs)
>>> start_scores = outputs.start_logits
>>> end_scores = outputs.end_logits
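As a hedged follow-up, the start and end logits can be mapped back to an answer span; with a base (untuned) checkpoint the indices are arbitrary, so this only illustrates the pattern:
>>> import jax.numpy as jnp
>>> start_index = int(jnp.argmax(start_scores, axis=-1)[0])
>>> end_index = int(jnp.argmax(end_scores, axis=-1)[0])
>>> answer_tokens = inputs["input_ids"][0, start_index : end_index + 1]
>>> tokenizer.decode(answer_tokens)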
encode
( input_ids: Array attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Example:
>>> from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
>>> model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
>>> text = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer(text, max_length=1024, return_tensors="jax")
>>> encoder_outputs = model.encode(**inputs)
decode
( decoder_input_ids encoder_outputs encoder_attention_mask: typing.Optional[jax.Array] = None decoder_attention_mask: typing.Optional[jax.Array] = None decoder_position_ids: typing.Optional[jax.Array] = None past_key_values: dict = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) — Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray))) — Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state, of shape (batch_size, sequence_length, hidden_size), is a sequence of hidden-states at the output of the last layer of the encoder and is used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
If you want to change the padding behavior, you should modify it to your needs. See diagram 1 in the paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
Example:
>>> import jax.numpy as jnp
>>> from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
>>> model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
>>> text = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer(text, max_length=1024, return_tensors="jax")
>>> encoder_outputs = model.encode(**inputs)
>>> decoder_start_token_id = model.config.decoder_start_token_id
>>> decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
>>> outputs = model.decode(decoder_input_ids, encoder_outputs)
>>> last_decoder_hidden_states = outputs.last_hidden_state
FlaxBartForCausalLM
class transformers.FlaxBartForCausalLM
( config: BartConfig input_shape: typing.Tuple[int] = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs )
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Bart Decoder Model with a language modeling head on top (linear layer with weights tied to the input embeddings) e.g for autoregressive tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
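As a hedged illustration of the JIT support listed above (this sketch is not part of the official documentation; the wrapper function name is arbitrary):
>>> import jax
>>> from transformers import AutoTokenizer, FlaxBartForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
>>> model = FlaxBartForCausalLM.from_pretrained("facebook/bart-base")
>>> # JIT-compile the forward pass; later calls with the same input shapes reuse the compiled function
>>> @jax.jit
... def forward(input_ids, attention_mask):
...     return model(input_ids=input_ids, attention_mask=attention_mask).logits
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
>>> logits = forward(inputs["input_ids"], inputs["attention_mask"])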
__call__
< source >
( input_ids: Array attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None encoder_hidden_states: typing.Optional[jax.Array] = None encoder_attention_mask: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None past_key_values: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) — Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids are provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training, following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray)), optional) — Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the encoder, used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
If you want to change padding behavior, you should modify it to your needs. See diagram 1 in the paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BartConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
The FlaxBartDecoderPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxBartForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
>>> model = FlaxBartForCausalLM.from_pretrained("facebook/bart-base")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
>>> outputs = model(**inputs)
>>> # retrieve logits for next token prediction
>>> next_token_logits = outputs.logits[:, -1]
https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer | Audio Spectrogram Transformer
Overview
The Audio Spectrogram Transformer model was proposed in AST: Audio Spectrogram Transformer by Yuan Gong, Yu-An Chung, James Glass. The Audio Spectrogram Transformer applies a Vision Transformer to audio, by turning audio into an image (spectrogram). The model obtains state-of-the-art results for audio classification.
The abstract from the paper is the following:
In the past decade, convolutional neural networks (CNNs) have been widely adopted as the main building block for end-to-end audio classification models, which aim to learn a direct mapping from audio spectrograms to corresponding labels. To better capture long-range global context, a recent trend is to add a self-attention mechanism on top of the CNN, forming a CNN-attention hybrid model. However, it is unclear whether the reliance on a CNN is necessary, and if neural networks purely based on attention are sufficient to obtain good performance in audio classification. In this paper, we answer the question by introducing the Audio Spectrogram Transformer (AST), the first convolution-free, purely attention-based model for audio classification. We evaluate AST on various audio classification benchmarks, where it achieves new state-of-the-art results of 0.485 mAP on AudioSet, 95.6% accuracy on ESC-50, and 98.1% accuracy on Speech Commands V2.
Tips:
When fine-tuning the Audio Spectrogram Transformer (AST) on your own dataset, it’s recommended to take care of the input normalization (to make sure the input has mean of 0 and std of 0.5). ASTFeatureExtractor takes care of this. Note that it uses the AudioSet mean and std by default. You can check ast/src/get_norm_stats.py to see how the authors compute the stats for a downstream dataset; a short sketch after these tips shows how custom statistics could be passed in.
Note that the AST needs a low learning rate (the authors use a 10 times smaller learning rate compared to their CNN model proposed in the PSLA paper) and converges quickly, so please search for a suitable learning rate and learning rate scheduler for your task.
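As mentioned in the first tip, ASTFeatureExtractor uses the AudioSet statistics unless told otherwise. The following minimal sketch shows how custom statistics could be plugged in; the numbers are placeholders rather than values computed from a real dataset:
>>> from transformers import ASTFeatureExtractor
>>> # default: normalize with the AudioSet mean and std
>>> feature_extractor = ASTFeatureExtractor()
>>> # hypothetical statistics computed on your own downstream dataset
>>> feature_extractor = ASTFeatureExtractor(mean=-6.85, std=5.33)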
Audio Spectrogram Transformer architecture. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with the Audio Spectrogram Transformer.
Audio Classification
A notebook illustrating inference with AST for audio classification can be found here.
ASTForAudioClassification is supported by this example script and notebook.
See also: Audio classification.
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ASTConfig
class transformers.ASTConfig
< source >
( hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.0 attention_probs_dropout_prob = 0.0 initializer_range = 0.02 layer_norm_eps = 1e-12 patch_size = 16 qkv_bias = True frequency_stride = 10 time_stride = 10 max_length = 1024 num_mel_bins = 128 **kwargs )
Parameters
hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers.
patch_size (int, optional, defaults to 16) — The size (resolution) of each patch.
qkv_bias (bool, optional, defaults to True) — Whether to add a bias to the queries, keys and values.
frequency_stride (int, optional, defaults to 10) — Frequency stride to use when patchifying the spectrograms.
time_stride (int, optional, defaults to 10) — Temporal stride to use when patchifying the spectrograms.
max_length (int, optional, defaults to 1024) — Temporal dimension of the spectrograms.
num_mel_bins (int, optional, defaults to 128) — Frequency dimension of the spectrograms (number of Mel-frequency bins).
This is the configuration class to store the configuration of a ASTModel. It is used to instantiate an AST model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the AST MIT/ast-finetuned-audioset-10-10-0.4593 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import ASTConfig, ASTModel
>>> # Initializing an AST MIT/ast-finetuned-audioset-10-10-0.4593 style configuration
>>> configuration = ASTConfig()
>>> # Initializing a model (with random weights) from that configuration
>>> model = ASTModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
ASTFeatureExtractor
class transformers.ASTFeatureExtractor
( feature_size = 1 sampling_rate = 16000 num_mel_bins = 128 max_length = 1024 padding_value = 0.0 do_normalize = True mean = -4.2677393 std = 4.5689974 return_attention_mask = False **kwargs )
Parameters
feature_size (int, optional, defaults to 1) — The feature dimension of the extracted features.
sampling_rate (int, optional, defaults to 16000) — The sampling rate at which the audio files should be digitalized expressed in hertz (Hz).
num_mel_bins (int, optional, defaults to 128) — Number of Mel-frequency bins.
max_length (int, optional, defaults to 1024) — Maximum length to which to pad/truncate the extracted features.
do_normalize (bool, optional, defaults to True) — Whether or not to normalize the log-Mel features using mean and std.
mean (float, optional, defaults to -4.2677393) — The mean value used to normalize the log-Mel features. Uses the AudioSet mean by default.
std (float, optional, defaults to 4.5689974) — The standard deviation value used to normalize the log-Mel features. Uses the AudioSet standard deviation by default.
return_attention_mask (bool, optional, defaults to False) — Whether or not call() should return attention_mask.
Constructs an Audio Spectrogram Transformer (AST) feature extractor.
This feature extractor inherits from SequenceFeatureExtractor which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
This class extracts mel-filter bank features from raw speech using TorchAudio, pads/truncates them to a fixed length and normalizes them using a mean and standard deviation.
( raw_speech: typing.Union[numpy.ndarray, typing.List[float], typing.List[numpy.ndarray], typing.List[typing.List[float]]] sampling_rate: typing.Optional[int] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None **kwargs )
Parameters
raw_speech (np.ndarray, List[float], List[np.ndarray], List[List[float]]) — The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float values, a list of numpy arrays or a list of list of float values. Must be mono channel audio, not stereo, i.e. single float per timestep.
sampling_rate (int, optional) — The sampling rate at which the raw_speech input was sampled. It is strongly recommended to pass sampling_rate at the forward call to prevent silent errors.
return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
Main method to featurize and prepare for the model one or several sequence(s).
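A minimal usage sketch of this method (assuming torch and torchaudio are installed; the waveform here is just one second of silence as a placeholder):
>>> import numpy as np
>>> from transformers import ASTFeatureExtractor
>>> feature_extractor = ASTFeatureExtractor()
>>> waveform = np.zeros(16000, dtype=np.float32)  # 1 s of silence at 16 kHz
>>> inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
>>> # inputs["input_values"] has shape (batch_size, max_length, num_mel_bins), i.e. (1, 1024, 128) with the defaults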
ASTModel
class transformers.ASTModel
< source >
( config: ASTConfig )
Parameters
config (ASTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare AST Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_values: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, max_length, num_mel_bins)) — Float values of mel features extracted from the raw audio waveform. A raw audio waveform can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_features, the AutoFeatureExtractor should be used for extracting the mel features, padding and conversion into a tensor of type torch.FloatTensor. See call().
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ASTConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The ASTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoProcessor, ASTModel
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> processor = AutoProcessor.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")
>>> model = ASTModel.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")
>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 1214, 768]
ASTForAudioClassification
class transformers.ASTForAudioClassification
< source >
( config: ASTConfig )
Parameters
config (ASTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Audio Spectrogram Transformer model with an audio classification head on top (a linear layer on top of the pooled output) e.g. for datasets like AudioSet, Speech Commands v2.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_values: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, max_length, num_mel_bins)) — Float values of mel features extracted from the raw audio waveform. A raw audio waveform can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_features, the AutoFeatureExtractor should be used for extracting the mel features, padding and conversion into a tensor of type torch.FloatTensor. See call().
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the audio classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ASTConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The ASTForAudioClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoFeatureExtractor, ASTForAudioClassification
>>> from datasets import load_dataset
>>> import torch
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")
>>> model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")
>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_class_ids = torch.argmax(logits, dim=-1).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]
>>> predicted_label
'Speech'
>>> # compute the loss for a chosen target label
>>> target_label = model.config.id2label[0]
>>> inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
>>> loss = model(**inputs).loss
>>> round(loss.item(), 2)
0.17
https://huggingface.co/docs/transformers/model_doc/albert | ALBERT
Overview
The ALBERT model was proposed in ALBERT: A Lite BERT for Self-supervised Learning of Language Representations by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. It presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT:
Splitting the embedding matrix into two smaller matrices.
Using repeating layers split among groups.
The abstract from the paper is the following:
Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations, longer training times, and unexpected model degradation. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large.
Tips:
ALBERT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.
ALBERT uses repeating layers which results in a small memory footprint, however the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
Using an embedding size E that is smaller than the hidden size H is justified because the embeddings are context independent (one embedding vector represents one token), whereas hidden states are context dependent (one hidden state represents a sequence of tokens), so it makes sense to have H >> E. Moreover, the embedding matrix is large since it is V x E (V being the vocab size), so choosing E < H reduces the number of parameters (the sketch after these tips gives a rough sense of the resulting model size).
Layers are split into groups that share parameters (to save memory). Next sentence prediction is replaced by a sentence order prediction: the inputs are two consecutive sentences A and B, fed either as A followed by B or as B followed by A, and the model must predict whether they have been swapped.
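A rough, hedged sketch of the parameter savings (the exact count depends on the library version; expect on the order of 12M parameters, versus roughly 110M for a comparable BERT-base):
>>> from transformers import AlbertConfig, AlbertModel
>>> # ALBERT-base style configuration: factorized embeddings (E=128) and 12 parameter-shared layers
>>> config = AlbertConfig(hidden_size=768, num_attention_heads=12, intermediate_size=3072)
>>> model = AlbertModel(config)
>>> num_params = sum(p.numel() for p in model.parameters())  # on the order of 12 million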
This model was contributed by lysandre. This model jax version was contributed by kamalkraj. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
AlbertConfig
class transformers.AlbertConfig
< source >
( vocab_size = 30000 embedding_size = 128 hidden_size = 4096 num_hidden_layers = 12 num_hidden_groups = 1 num_attention_heads = 64 intermediate_size = 16384 inner_group_num = 1 hidden_act = 'gelu_new' hidden_dropout_prob = 0 attention_probs_dropout_prob = 0 max_position_embeddings = 512 type_vocab_size = 2 initializer_range = 0.02 layer_norm_eps = 1e-12 classifier_dropout_prob = 0.1 position_embedding_type = 'absolute' pad_token_id = 0 bos_token_id = 2 eos_token_id = 3 **kwargs )
Parameters
vocab_size (int, optional, defaults to 30000) — Vocabulary size of the ALBERT model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling AlbertModel or TFAlbertModel.
embedding_size (int, optional, defaults to 128) — Dimensionality of vocabulary embeddings.
hidden_size (int, optional, defaults to 4096) — Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
num_hidden_groups (int, optional, defaults to 1) — Number of groups for the hidden layers, parameters in the same group are shared.
num_attention_heads (int, optional, defaults to 64) — Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 16384) — The dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
inner_group_num (int, optional, defaults to 1) — The number of inner repetition of attention and ffn.
hidden_act (str or Callable, optional, defaults to "gelu_new") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0) — The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) — The vocabulary size of the token_type_ids passed when calling AlbertModel or TFAlbertModel.
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers.
classifier_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for attached classifiers.
position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.).
This is the configuration class to store the configuration of a AlbertModel or a TFAlbertModel. It is used to instantiate an ALBERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ALBERT albert-xxlarge-v2 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Examples:
>>> from transformers import AlbertConfig, AlbertModel
>>> # Initializing an ALBERT-xxlarge style configuration
>>> albert_xxlarge_configuration = AlbertConfig()
>>> # Initializing an ALBERT-base style configuration
>>> albert_base_configuration = AlbertConfig(
... hidden_size=768,
... num_attention_heads=12,
... intermediate_size=3072,
... )
>>> # Initializing a model (with random weights) from the ALBERT-xxlarge style configuration
>>> model = AlbertModel(albert_xxlarge_configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
AlbertTokenizer
class transformers.AlbertTokenizer
< source >
( vocab_file do_lower_case = True remove_space = True keep_accents = False bos_token = '[CLS]' eos_token = '[SEP]' unk_token = '<unk>' sep_token = '[SEP]' pad_token = '<pad>' cls_token = '[CLS]' mask_token = '[MASK]' sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None **kwargs )
Parameters
vocab_file (str) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer.
do_lower_case (bool, optional, defaults to True) — Whether or not to lowercase the input when tokenizing.
remove_space (bool, optional, defaults to True) — Whether or not to strip the text when tokenizing (removing excess spaces before and after the string).
keep_accents (bool, optional, defaults to False) — Whether or not to keep accents when tokenizing.
bos_token (str, optional, defaults to "[CLS]") — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "[SEP]") — The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
sp_model_kwargs (dict, optional) — Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for SentencePiece can be used, among other things, to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.
sp_model (SentencePieceProcessor) — The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Construct an ALBERT tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
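A short usage sketch (assuming sentencepiece is installed and using the albert-base-v2 checkpoint):
>>> from transformers import AlbertTokenizer
>>> tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
>>> tokens = tokenizer.tokenize("Hello, my dog is cute")
>>> ids = tokenizer.convert_tokens_to_ids(tokens)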
build_inputs_with_special_tokens
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An ALBERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
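A minimal sketch of the formats above, using arbitrary placeholder token IDs (the actual [CLS]/[SEP] IDs depend on the checkpoint):
>>> from transformers import AlbertTokenizer
>>> tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
>>> single = tokenizer.build_inputs_with_special_tokens([10, 20, 30])  # [CLS_id, 10, 20, 30, SEP_id]
>>> pair = tokenizer.build_inputs_with_special_tokens([10, 20], [30, 40])  # [CLS_id, 10, 20, SEP_id, 30, 40, SEP_id]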
get_special_tokens_mask
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model.
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.
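A small sketch of how this might be used on a sequence that was already encoded with special tokens:
>>> from transformers import AlbertTokenizer
>>> tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
>>> ids = tokenizer.encode("Hello world")  # includes [CLS] and [SEP]
>>> mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
>>> # the mask marks [CLS] and [SEP] with 1 and the regular tokens with 0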
create_token_type_ids_from_sequences
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. An ALBERT sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
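As a sketch of the mask layout above (the sentence contents are arbitrary):
>>> from transformers import AlbertTokenizer
>>> tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
>>> a = tokenizer.encode("How are you?", add_special_tokens=False)
>>> b = tokenizer.encode("Fine, thanks.", add_special_tokens=False)
>>> token_type_ids = tokenizer.create_token_type_ids_from_sequences(a, b)
>>> # zeros cover [CLS] A [SEP], ones cover B [SEP]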
save_vocabulary
< source >
( save_directory: str filename_prefix: typing.Optional[str] = None )
AlbertTokenizerFast
class transformers.AlbertTokenizerFast
< source >
( vocab_file = None tokenizer_file = None do_lower_case = True remove_space = True keep_accents = False bos_token = '[CLS]' eos_token = '[SEP]' unk_token = '<unk>' sep_token = '[SEP]' pad_token = '<pad>' cls_token = '[CLS]' mask_token = '[MASK]' **kwargs )
Parameters
vocab_file (str) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer.
do_lower_case (bool, optional, defaults to True) — Whether or not to lowercase the input when tokenizing.
remove_space (bool, optional, defaults to True) — Whether or not to strip the text when tokenizing (removing excess spaces before and after the string).
keep_accents (bool, optional, defaults to False) — Whether or not to keep accents when tokenizing.
bos_token (str, optional, defaults to "[CLS]") — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "[SEP]") — The end of sequence token. When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
Construct a “fast” ALBERT tokenizer (backed by HuggingFace’s tokenizers library). Based on Unigram. This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
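A minimal usage sketch (assuming the albert-base-v2 checkpoint; the fast tokenizer requires the tokenizers backend):
>>> from transformers import AlbertTokenizerFast
>>> tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2")
>>> encoding = tokenizer("Hello, my dog is cute")
>>> # encoding contains input_ids, token_type_ids and attention_mask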
build_inputs_with_special_tokens
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs to which the special tokens will be added
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
list of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An ALBERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
create_token_type_ids_from_sequences
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of ids.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of token type IDs according to the given sequence(s).
Creates a mask from the two sequences passed to be used in a sequence-pair classification task. An ALBERT sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
Albert specific outputs
class transformers.models.albert.modeling_albert.AlbertForPreTrainingOutput
< source >
( loss: typing.Optional[torch.FloatTensor] = None prediction_logits: FloatTensor = None sop_logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )
Parameters
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
sop_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of AlbertForPreTraining.
class transformers.models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput
< source >
( loss: tf.Tensor = None prediction_logits: tf.Tensor = None sop_logits: tf.Tensor = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None )
Parameters
prediction_logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
sop_logits (tf.Tensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of TFAlbertForPreTraining.
AlbertModel
class transformers.AlbertModel
< source >
( config: AlbertConfig add_pooling_layer: bool = True )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare ALBERT Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The AlbertModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, AlbertModel
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = AlbertModel.from_pretrained("albert-base-v2")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
AlbertForPreTraining
class transformers.AlbertForPreTraining
< source >
( config: AlbertConfig )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with two heads on top as done during the pretraining: a masked language modeling head and a sentence order prediction (classification) head.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None sentence_order_label: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.albert.modeling_albert.AlbertForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
sentence_order_label (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair (see input_ids docstring) Indices should be in [0, 1]. 0 indicates original order (sequence A, then sequence B), 1 indicates switched order (sequence B, then sequence A).
A transformers.models.albert.modeling_albert.AlbertForPreTrainingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
sop_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The AlbertForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, AlbertForPreTraining
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = AlbertForPreTraining.from_pretrained("albert-base-v2")
>>> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)
>>> # batch size 1
>>> outputs = model(input_ids)
>>> prediction_logits = outputs.prediction_logits
>>> sop_logits = outputs.sop_logits
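The example above only inspects the two sets of logits. As a minimal sketch (reusing the model and input_ids from the example, with toy targets rather than a real pretraining batch), the combined pretraining loss can be obtained by also passing labels and sentence_order_label:
>>> # toy targets only; real pretraining uses masked inputs and genuine sentence pairs
>>> labels = input_ids.clone()  # MLM targets; positions set to -100 would be ignored
>>> sentence_order_label = torch.tensor([0])  # 0 = sentences are in the original order
>>> outputs = model(input_ids, labels=labels, sentence_order_label=sentence_order_label)
>>> loss = outputs.loss  # sum of the masked LM loss and the sentence order prediction loss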
AlbertForMaskedLM
class transformers.AlbertForMaskedLM
< source >
( config )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The AlbertForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> import torch
>>> from transformers import AutoTokenizer, AlbertForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = AlbertForMaskedLM.from_pretrained("albert-base-v2")
>>> # prepare an input containing a [MASK] token
>>> inputs = tokenizer("The capital of [MASK] is Paris.", return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # retrieve the index of [MASK]
>>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
>>> tokenizer.decode(predicted_token_id)
'france'
>>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
>>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
>>> outputs = model(**inputs, labels=labels)
>>> round(outputs.loss.item(), 2)
0.81
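The inputs_embeds argument described above is rarely shown in examples; a minimal sketch of bypassing the internal embedding lookup, reusing the model and inputs from the example (the embeddings below are exactly what the model would otherwise compute from input_ids):
>>> # pass precomputed embeddings instead of input_ids
>>> embedding_layer = model.get_input_embeddings()  # ALBERT's word-embedding lookup table
>>> inputs_embeds = embedding_layer(inputs.input_ids)
>>> with torch.no_grad():
...     logits = model(inputs_embeds=inputs_embeds, attention_mask=inputs.attention_mask).logits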
AlbertForSequenceClassification
class transformers.AlbertForSequenceClassification
< source >
( config: AlbertConfig )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Albert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The AlbertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example of single-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, AlbertForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("textattack/albert-base-v2-imdb")
>>> model = AlbertForSequenceClassification.from_pretrained("textattack/albert-base-v2-imdb")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax().item()
>>> model.config.id2label[predicted_class_id]
'LABEL_1'
>>> # to train a model on num_labels classes, pass num_labels to from_pretrained(...)
>>> num_labels = len(model.config.id2label)
>>> model = AlbertForSequenceClassification.from_pretrained("textattack/albert-base-v2-imdb", num_labels=num_labels)
>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
0.12
Example of multi-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, AlbertForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("textattack/albert-base-v2-imdb")
>>> model = AlbertForSequenceClassification.from_pretrained("textattack/albert-base-v2-imdb", problem_type="multi_label_classification")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
>>> # to train a model on num_labels classes, pass num_labels to from_pretrained(...)
>>> num_labels = len(model.config.id2label)
>>> model = AlbertForSequenceClassification.from_pretrained(
... "textattack/albert-base-v2-imdb", num_labels=num_labels, problem_type="multi_label_classification"
... )
>>> labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss
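As noted in the labels description above, setting config.num_labels == 1 switches the head to regression with a mean-squared-error loss. A minimal sketch with a hypothetical continuous target, reusing the tokenizer from the example (the albert-base-v2 checkpoint gets a freshly initialized regression head here, so the loss value itself is not meaningful until fine-tuning):
>>> # regression: one continuous target per sequence (hypothetical example)
>>> model = AlbertForSequenceClassification.from_pretrained("albert-base-v2", num_labels=1)
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([0.5])  # float target of shape (batch_size,)
>>> loss = model(**inputs, labels=labels).loss  # mean-squared-error loss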
AlbertForMultipleChoice
class transformers.AlbertForMultipleChoice
< source >
( config: AlbertConfig )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (see input_ids above)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The AlbertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, AlbertForMultipleChoice
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = AlbertForMultipleChoice.from_pretrained("albert-base-v2")
>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> labels = torch.tensor(0).unsqueeze(0)
>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels)
>>> # the classification head is newly initialized and still needs to be trained
>>> loss = outputs.loss
>>> logits = outputs.logits
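The unsqueeze(0) above produces the (batch_size, num_choices, sequence_length) shape the model expects. A minimal sketch of the same reshaping for a batch of two questions, reusing the tokenizer, model, prompt and choices from the example (the second question and its choices are hypothetical):
>>> # batch of 2 questions x 2 choices: tokenize every (prompt, choice) pair, then reshape
>>> prompt2 = "Water boils at 100 degrees Celsius at sea level."  # hypothetical second question
>>> choices2 = ["It freezes instead.", "It turns to steam."]
>>> flat_prompts = [prompt, prompt, prompt2, prompt2]
>>> flat_choices = [choice0, choice1] + choices2
>>> enc = tokenizer(flat_prompts, flat_choices, return_tensors="pt", padding=True)
>>> batched = {k: v.view(2, 2, -1) for k, v in enc.items()}  # (batch_size, num_choices, seq_len)
>>> logits = model(**batched).logits  # shape (2, 2)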
AlbertForTokenClassification
class transformers.AlbertForTokenClassification
< source >
( config: AlbertConfig )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The AlbertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, AlbertForTokenClassification
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = AlbertForTokenClassification.from_pretrained("albert-base-v2")
>>> inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_token_class_ids = logits.argmax(-1)
>>> # note that tokens are classified rather than input words, which means that
>>> # there might be more predicted token classes than words;
>>> # multiple token classes might account for the same word
>>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
>>> labels = predicted_token_class_ids
>>> loss = model(**inputs, labels=labels).loss
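Since ALBERT's SentencePiece tokenizer splits words into sub-tokens, there are usually more predicted token classes than input words. A minimal sketch of grouping predictions back to words, reusing the inputs and predicted_tokens_classes from the example and assuming a fast tokenizer is loaded (so that word_ids() is available on the encoding):
>>> # keep the label of the first sub-token of each word (requires a fast tokenizer)
>>> word_ids = inputs.word_ids(batch_index=0)  # None for special tokens
>>> word_labels = {}
>>> for index, word_id in enumerate(word_ids):
...     if word_id is not None and word_id not in word_labels:
...         word_labels[word_id] = predicted_tokens_classes[index]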
AlbertForQuestionAnswering
class transformers.AlbertForQuestionAnswering
< source >
( config: AlbertConfig )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for the position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account when computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for the position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account when computing the loss.
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The AlbertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, AlbertForQuestionAnswering
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("twmkn9/albert-base-v2-squad2")
>>> model = AlbertForQuestionAnswering.from_pretrained("twmkn9/albert-base-v2-squad2")
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
'a nice puppet'
>>> # target span is "nice puppet"
>>> target_start_index = torch.tensor([12])
>>> target_end_index = torch.tensor([13])
>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = outputs.loss
>>> round(loss.item(), 2)
7.36
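For quick experiments, the same extractive question answering is also available through the pipeline API; a minimal sketch with the same checkpoint (the decoded answer should match the manually extracted span above):
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model="twmkn9/albert-base-v2-squad2")
>>> result = question_answerer(question="Who was Jim Henson?", context="Jim Henson was a nice puppet")
>>> answer = result["answer"]  # the dict also contains "score", "start" and "end"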
TFAlbertModel
class transformers.TFAlbertModel
< source >
( *args **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Albert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
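A minimal sketch of the three equivalent calling conventions listed above, using the albert-base-v2 checkpoint:
>>> from transformers import AutoTokenizer, TFAlbertModel
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = TFAlbertModel.from_pretrained("albert-base-v2")
>>> enc = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> out_kwargs = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])  # keyword arguments
>>> out_list = model([enc["input_ids"], enc["attention_mask"]])  # list in the first positional argument
>>> out_dict = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})  # dictionary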
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token), further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the sentence order prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input; you are often better off averaging or pooling the sequence of hidden states for the whole input sequence, as in the pooling sketch after the example below.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFAlbertModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, TFAlbertModel
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = TFAlbertModel.from_pretrained("albert-base-v2")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)
>>> last_hidden_states = outputs.last_hidden_state
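As the pooler_output description above suggests, a masked mean over the token states is often a better sentence representation than the pooled [CLS] state; a minimal sketch reusing the inputs and outputs from the example:
>>> # masked mean pooling over the last hidden states
>>> mask = tf.cast(inputs["attention_mask"], tf.float32)[:, :, None]  # (batch, seq_len, 1)
>>> summed = tf.reduce_sum(outputs.last_hidden_state * mask, axis=1)
>>> mean_pooled = summed / tf.reduce_sum(mask, axis=1)  # (batch, hidden_size)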
TFAlbertForPreTraining
class transformers.TFAlbertForPreTraining
< source >
( *args **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with two heads on top for pretraining: a masked language modeling head and a sentence order prediction (classification) head.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None sentence_order_label: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
A transformers.models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
prediction_logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
sop_logits (tf.Tensor of shape (batch_size, 2)) — Prediction scores of the sentence order prediction (classification) head (scores of original/switched order before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFAlbertForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFAlbertForPreTraining
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = TFAlbertForPreTraining.from_pretrained("albert-base-v2")
>>> input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :]
>>> # batch size 1
>>> outputs = model(input_ids)
>>> prediction_logits = outputs.prediction_logits
>>> sop_logits = outputs.sop_logits
TFAlbertForMaskedLM
class transformers.TFAlbertForMaskedLM
< source >
( *args **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFAlbertForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFAlbertForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = TFAlbertForMaskedLM.from_pretrained("albert-base-v2")
>>> # prepare an input containing the mask token
>>> inputs = tokenizer("The capital of [MASK] is Paris.", return_tensors="tf")
>>> logits = model(**inputs).logits
>>> # retrieve the index of the [MASK] token
>>> mask_token_index = tf.where(inputs.input_ids == tokenizer.mask_token_id)[0][1]
>>> predicted_token_id = tf.math.argmax(logits[0, mask_token_index], axis=-1)
>>> tokenizer.decode(predicted_token_id)
'france'
>>> labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
>>> labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
>>> outputs = model(**inputs, labels=labels)
>>> round(float(outputs.loss), 2)
0.81
TFAlbertForSequenceClassification
class transformers.TFAlbertForSequenceClassification
< source >
( *args **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Albert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
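To make the three options above concrete, the following minimal sketch (illustrative only, reusing the albert-base-v2 checkpoint; the sequence classification head it loads is freshly initialized) calls the same model in each of the three formats:
>>> from transformers import AutoTokenizer, TFAlbertForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = TFAlbertForSequenceClassification.from_pretrained("albert-base-v2")
>>> encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> # 1. all inputs as keyword arguments
>>> outputs = model(input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"])
>>> # 2. a list of input tensors, in the order given in the docstring
>>> outputs = model([encoding["input_ids"], encoding["attention_mask"]])
>>> # 3. a dictionary mapping input names to tensors
>>> outputs = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})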
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if config.num_labels > 1, a classification loss is computed (Cross-Entropy).
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFAlbertForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, TFAlbertForSequenceClassification
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("vumichien/albert-base-v2-imdb")
>>> model = TFAlbertForSequenceClassification.from_pretrained("vumichien/albert-base-v2-imdb")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> logits = model(**inputs).logits
>>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
>>> model.config.id2label[predicted_class_id]
'LABEL_1'
>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = TFAlbertForSequenceClassification.from_pretrained("vumichien/albert-base-v2-imdb", num_labels=num_labels)
>>> labels = tf.constant(1)
>>> loss = model(**inputs, labels=labels).loss
>>> round(float(loss), 2)
0.12
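Because fit() accepts the tokenizer output directly (see the input-format note above), a short fine-tuning sketch could look like the following. The toy texts, the learning rate, and relying on the model's built-in loss are illustrative assumptions rather than part of the API reference:
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFAlbertForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = TFAlbertForSequenceClassification.from_pretrained("albert-base-v2", num_labels=2)
>>> texts = ["I loved this movie!", "This was a waste of time."]  # toy dataset
>>> labels = tf.constant([1, 0])
>>> features = dict(tokenizer(texts, padding=True, return_tensors="tf"))
>>> # with no loss passed to compile(), recent Transformers versions fall back to the model's internal loss
>>> model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5))
>>> model.fit(features, labels, epochs=1, batch_size=2)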
TFAlbertForMultipleChoice
class transformers.TFAlbertForMultipleChoice
< source >
( *args **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1], where num_choices is the size of the second dimension of the input tensors (see input_ids above).
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFAlbertForMultipleChoice forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, TFAlbertForMultipleChoice
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = TFAlbertForMultipleChoice.from_pretrained("albert-base-v2")
>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
>>> outputs = model(inputs)
>>> # the multiple-choice head is newly initialized, so it still needs to be trained
>>> logits = outputs.logits
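The predicted choice can then be read from the logits, whose second dimension is num_choices. Since the head loaded from the base albert-base-v2 checkpoint is freshly initialized, the result is only meaningful once the head has been trained; the snippet below is a usage sketch:
>>> predicted_choice = int(tf.math.argmax(logits, axis=-1)[0])
>>> [choice0, choice1][predicted_choice]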
TFAlbertForTokenClassification
class transformers.TFAlbertForTokenClassification
< source >
( *args **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFAlbertForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, TFAlbertForTokenClassification
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = TFAlbertForTokenClassification.from_pretrained("albert-base-v2")
>>> inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
>>> logits = model(**inputs).logits
>>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
>>> # Note that tokens are classified rather than input words, which means
>>> # there might be more predicted token classes than words; several token
>>> # classes might account for the same word
>>> predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
>>> labels = predicted_token_class_ids
>>> loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
TFAlbertForQuestionAnswering
class transformers.TFAlbertForQuestionAnswering
< source >
( *args **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None start_positions: np.ndarray | tf.Tensor | None = None end_positions: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
start_positions (tf.Tensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
end_positions (tf.Tensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFAlbertForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, TFAlbertForQuestionAnswering
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("vumichien/albert-base-v2-squad2")
>>> model = TFAlbertForQuestionAnswering.from_pretrained("vumichien/albert-base-v2-squad2")
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors="tf")
>>> outputs = model(**inputs)
>>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
>>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'a nice puppet'
>>> # target span: "nice puppet"
>>> target_start_index = tf.constant([12])
>>> target_end_index = tf.constant([13])
>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = tf.math.reduce_mean(outputs.loss)
>>> round(float(loss), 2)
7.36
FlaxAlbertModel
class transformers.FlaxAlbertModel
< source >
( config: AlbertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
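For instance, half-precision usage could be set up along the following lines (a sketch; whether float16 or bfloat16 is appropriate depends on the hardware):
>>> import jax.numpy as jnp
>>> from transformers import FlaxAlbertModel
>>> # run the computation in float16 ...
>>> model = FlaxAlbertModel.from_pretrained("albert-base-v2", dtype=jnp.float16)
>>> # ... and optionally cast the stored parameters to half precision as well
>>> model.params = model.to_fp16(model.params)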
The bare Albert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( input_ids attention_mask = None token_type_ids = None position_ids = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the sentence order prediction (classification) objective used during ALBERT pretraining.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxAlbertPreTrainedModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxAlbertModel
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = FlaxAlbertModel.from_pretrained("albert-base-v2")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
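The pooled representation described in the return section above is available on the same output object:
>>> pooled_output = outputs.pooler_output  # shape (batch_size, hidden_size)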
FlaxAlbertForPreTraining
class transformers.FlaxAlbertForPreTraining
< source >
( config: AlbertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Albert Model with two heads on top as done during the pretraining: a masked language modeling head and a sentence order prediction (classification) head.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( input_ids attention_mask = None token_type_ids = None position_ids = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.albert.modeling_flax_albert.FlaxAlbertForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.albert.modeling_flax_albert.FlaxAlbertForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.albert.modeling_flax_albert.FlaxAlbertForPreTrainingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
prediction_logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
sop_logits (jnp.ndarray of shape (batch_size, 2)) — Prediction scores of the sentence order prediction (classification) head (scores of the two sentence-order classes before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxAlbertPreTrainedModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxAlbertForPreTraining
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = FlaxAlbertForPreTraining.from_pretrained("albert-base-v2")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
>>> outputs = model(**inputs)
>>> prediction_logits = outputs.prediction_logits
>>> seq_relationship_logits = outputs.sop_logits
FlaxAlbertForMaskedLM
class transformers.FlaxAlbertForMaskedLM
< source >
( config: AlbertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Albert Model with a language modeling head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( input_ids attention_mask = None token_type_ids = None position_ids = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxAlbertPreTrainedModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxAlbertForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = FlaxAlbertForMaskedLM.from_pretrained("albert-base-v2")
>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="jax")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
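To recover the prediction at the [MASK] position, the TensorFlow masked-LM example further up can be mirrored in Flax (a sketch; the decoded string depends on the checkpoint and is therefore not shown):
>>> import jax.numpy as jnp
>>> mask_index = int(jnp.argmax(inputs["input_ids"][0] == tokenizer.mask_token_id))
>>> predicted_token_id = int(jnp.argmax(logits[0, mask_index]))
>>> tokenizer.decode(predicted_token_id)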
FlaxAlbertForSequenceClassification
class transformers.FlaxAlbertForSequenceClassification
< source >
( config: AlbertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Albert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( input_ids attention_mask = None token_type_ids = None position_ids = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxAlbertPreTrainedModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxAlbertForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = FlaxAlbertForSequenceClassification.from_pretrained("albert-base-v2")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
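As in the TensorFlow sequence classification example earlier, the predicted class id can be read from the logits. With the base albert-base-v2 checkpoint the classification head is newly initialized, so the label mapping is only meaningful after fine-tuning; this is a usage sketch:
>>> import jax.numpy as jnp
>>> predicted_class_id = int(jnp.argmax(logits, axis=-1)[0])
>>> model.config.id2label[predicted_class_id]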
FlaxAlbertForMultipleChoice
class transformers.FlaxAlbertForMultipleChoice
< source >
( config: AlbertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Albert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
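As a quick sketch of the dtype parameter described above (an assumption-laden example: it presumes a GPU where float16 is supported; the checkpoint weights stay in float32 unless you convert them explicitly):
>>> import jax.numpy as jnp
>>> from transformers import FlaxAlbertForMultipleChoice

>>> # run the computation in half precision; the parameters are left untouched
>>> model = FlaxAlbertForMultipleChoice.from_pretrained("albert-base-v2", dtype=jnp.float16)
>>> # optionally also cast the parameters, as mentioned for to_fp16()
>>> model.params = model.to_fp16(model.params)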
__call__
< source >
( input_ids attention_mask = None token_type_ids = None position_ids = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxAlbertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxAlbertForMultipleChoice
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = FlaxAlbertForMultipleChoice.from_pretrained("albert-base-v2")
>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="jax", padding=True)
>>> outputs = model(**{k: v[None, :] for k, v in encoding.items()})
>>> logits = outputs.logits
FlaxAlbertForTokenClassification
class transformers.FlaxAlbertForTokenClassification
< source >
( config: AlbertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Albert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( input_ids attention_mask = None token_type_ids = None position_ids = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxAlbertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxAlbertForTokenClassification
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = FlaxAlbertForTokenClassification.from_pretrained("albert-base-v2")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
FlaxAlbertForQuestionAnswering
class transformers.FlaxAlbertForQuestionAnswering
< source >
( config: AlbertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Albert Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( input_ids attention_mask = None token_type_ids = None position_ids = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxAlbertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxAlbertForQuestionAnswering
>>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
>>> model = FlaxAlbertForQuestionAnswering.from_pretrained("albert-base-v2")
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors="jax")
>>> outputs = model(**inputs)
>>> start_scores = outputs.start_logits
>>> end_scores = outputs.end_logits |
Utilities for Trainer
This page lists all the utility functions used by Trainer.
Most of those are only useful if you are studying the code of the Trainer in the library.
Utilities
class transformers.EvalPrediction
< source >
( predictions: typing.Union[numpy.ndarray, typing.Tuple[numpy.ndarray]] label_ids: typing.Union[numpy.ndarray, typing.Tuple[numpy.ndarray]] inputs: typing.Union[numpy.ndarray, typing.Tuple[numpy.ndarray], NoneType] = None )
Parameters
predictions (np.ndarray) — Predictions of the model.
label_ids (np.ndarray) — Targets to be matched.
inputs (np.ndarray, optional) —
Evaluation output (always contains labels), to be used to compute metrics.
class transformers.IntervalStrategy
< source >
( value names = None module = None qualname = None type = None start = 1 )
An enumeration.
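A minimal sketch of where this enum shows up (assuming the usual TrainingArguments strategy fields, which also accept the equivalent strings "no", "steps" and "epoch"):
from transformers import IntervalStrategy, TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy=IntervalStrategy.EPOCH,  # evaluate at the end of every epoch
    save_strategy="steps",  # the plain string spelling works as well
    save_steps=500,
)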
transformers.set_seed
< source >
( seed: int )
Parameters
seed (int) — The seed to set.
Helper function for reproducible behavior to set the seed in random, numpy, torch and/or tf (if installed).
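For example, calling it once at the top of a training script is enough for reproducible runs:
from transformers import set_seed

set_seed(42)  # seeds Python's random, numpy, torch and tf (whichever are installed)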
transformers.torch_distributed_zero_first
< source >
( local_rank: int )
Parameters
local_rank (int) — The rank of the local process.
Decorator to make all processes in distributed training wait for each local_master to do something.
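A minimal sketch of the intended pattern, assuming it is used as a context manager around a cache-producing step; the import path and the load_and_cache_dataset() helper are assumptions for illustration:
from transformers.trainer_pt_utils import torch_distributed_zero_first  # assumed import path

def prepare_dataset(local_rank):
    # the local main process enters first and fills the cache; the other local
    # processes wait at the barrier and then load from the already-built cache
    with torch_distributed_zero_first(local_rank):
        dataset = load_and_cache_dataset()  # hypothetical helper
    return dataset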
Callbacks internals
class transformers.trainer_callback.CallbackHandler
< source >
( callbacks model tokenizer optimizer lr_scheduler )
Internal class that just calls the list of callbacks in order.
Distributed Evaluation
class transformers.trainer_pt_utils.DistributedTensorGatherer
< source >
( world_size num_samples make_multiple_of = None padding_index = -100 )
Parameters
world_size (int) — The number of processes used in the distributed training.
num_samples (int) — The number of samples in our dataset.
make_multiple_of (int, optional) — If passed, the class assumes the datasets passed to each process are made to be a multiple of this argument (by adding samples).
padding_index (int, optional, defaults to -100) — The padding index to use if the arrays don’t all have the same sequence length.
A class responsible for properly gathering tensors (or nested list/tuple of tensors) on the CPU by chunks.
If our dataset has 16 samples with a batch size of 2 on 3 processes, and we gather and then transfer to CPU at every step, our sampler will generate the following indices:
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0, 1]
to get something whose size is a multiple of 3 (so that each process gets the same dataset length). Then processes 0, 1 and 2 will be responsible for making predictions for the following samples:
P0: [0, 1, 2, 3, 4, 5]
P1: [6, 7, 8, 9, 10, 11]
P2: [12, 13, 14, 15, 0, 1]
The first batch processed on each process will be:
P0: [0, 1]
P1: [6, 7]
P2: [12, 13]
So if we gather at the end of the first batch, we will get a tensor (nested list/tuple of tensor) corresponding to the following indices:
[0, 1, 6, 7, 12, 13]
If we directly concatenate our results without taking any precautions, the user will then get the predictions for the indices in this order at the end of the prediction loop:
[0, 1, 6, 7, 12, 13, 2, 3, 8, 9, 14, 15, 4, 5, 10, 11, 0, 1]
That is obviously not what the user wants. This class is there to solve that problem.
Add arrays to the internal storage. The storage is initialized to its full size when the first arrays are passed, so that if we are bound to get an OOM, it happens at the beginning.
Return the properly gathered arrays and truncate to the number of samples (since the sampler added some extras to get each process a dataset of the same length).
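A minimal sketch of the intended evaluation loop, assuming the add_arrays()/finalize() methods described just above and the distributed_concat() helper from the same module to gather per-process tensors (world_size, model, eval_dataset and eval_dataloader come from your own setup):
from transformers.trainer_pt_utils import DistributedTensorGatherer, distributed_concat

gatherer = DistributedTensorGatherer(world_size=world_size, num_samples=len(eval_dataset))
for batch in eval_dataloader:
    logits = model(**batch).logits
    # every process receives the concatenated chunk, e.g. [0, 1, 6, 7, 12, 13] on the first step
    gatherer.add_arrays(distributed_concat(logits).cpu().numpy())
predictions = gatherer.finalize()  # reordered and truncated back to len(eval_dataset)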
Trainer Argument Parser
class transformers.HfArgumentParser
< source >
( dataclass_types: typing.Union[DataClassType, typing.Iterable[DataClassType]] **kwargs )
This subclass of argparse.ArgumentParser uses type hints on dataclasses to generate arguments.
The class is designed to play well with the native argparse. In particular, you can add more (non-dataclass backed) arguments to the parser after initialization and you’ll get the output back after parsing as an additional namespace. Optional: To create sub argument groups use the _argument_group_name attribute in the dataclass.
parse_args_into_dataclasses
< source >
( args = None return_remaining_strings = False look_for_args_file = True args_filename = None args_file_flag = None ) → Tuple consisting of
Returns
Tuple consisting of
the dataclass instances in the same order as they were passed to the initializer.
if applicable, an additional namespace for more (non-dataclass backed) arguments added to the parser after initialization.
The potential list of remaining argument strings. (same as argparse.ArgumentParser.parse_known_args)
Parse command-line args into instances of the specified dataclass types.
This relies on argparse’s ArgumentParser.parse_known_args. See the doc at: docs.python.org/3.7/library/argparse.html#argparse.ArgumentParser.parse_args
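A minimal sketch of the typical pattern (ModelArguments is a hypothetical dataclass defined here for illustration; TrainingArguments is the usual companion):
from dataclasses import dataclass, field

from transformers import HfArgumentParser, TrainingArguments

@dataclass
class ModelArguments:
    model_name_or_path: str = field(default="albert-base-v2", metadata={"help": "Checkpoint to load."})

parser = HfArgumentParser((ModelArguments, TrainingArguments))
model_args, training_args = parser.parse_args_into_dataclasses()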
parse_dict
< source >
( args: typing.Dict[str, typing.Any] allow_extra_keys: bool = False ) → Tuple consisting of
Parameters
args (dict) — dict containing config values
allow_extra_keys (bool, optional, defaults to False) — If False, will raise an exception if the dict contains keys that are not parsed.
Returns
Tuple consisting of
the dataclass instances in the same order as they were passed to the initializer.
Alternative helper method that does not use argparse at all, instead using a dict to populate the dataclass types.
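For example, reusing the hypothetical ModelArguments dataclass from the sketch above:
parser = HfArgumentParser(ModelArguments)
(model_args,) = parser.parse_dict({"model_name_or_path": "albert-base-v2"})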
parse_json_file
< source >
( json_file: str allow_extra_keys: bool = False ) → Tuple consisting of
Parameters
json_file (str or os.PathLike) — File name of the json file to parse
allow_extra_keys (bool, optional, defaults to False) — If False, will raise an exception if the json file contains keys that are not parsed.
Returns
Tuple consisting of
the dataclass instances in the same order as they were passed to the initializer.
Alternative helper method that does not use argparse at all, instead loading a json file and populating the dataclass types.
parse_yaml_file
< source >
( yaml_file: str allow_extra_keys: bool = False ) → Tuple consisting of
Parameters
yaml_file (str or os.PathLike) — File name of the yaml file to parse
allow_extra_keys (bool, optional, defaults to False) — If False, will raise an exception if the yaml file contains keys that are not parsed.
Returns
Tuple consisting of
the dataclass instances in the same order as they were passed to the initializer.
Alternative helper method that does not use argparse at all, instead loading a yaml file and populating the dataclass types.
Debug Utilities
class transformers.debug_utils.DebugUnderflowOverflow
< source >
( model max_frames_to_save = 21 trace_batch_nums = [] abort_after_batch_num = None )
Parameters
model (nn.Module) — The model to debug.
max_frames_to_save (int, optional, defaults to 21) — How many frames back to record.
trace_batch_nums (List[int], optional, defaults to []) — Which batch numbers to trace (turns detection off).
abort_after_batch_num (int, optional) — If set, abort training after this batch number has finished.
This debug class helps detect and understand where the model starts getting very large or very small, and more importantly nan or inf weight and activation elements.
There are 2 working modes:
Underflow/overflow detection (default)
Specific batch absolute min/max tracing without detection
Mode 1: Underflow/overflow detection
To activate the underflow/overflow detection, initialize the object with the model:
debug_overflow = DebugUnderflowOverflow(model)
then run the training as normal. If nan or inf gets detected in at least one of the weight, input or output elements, this module will throw an exception and print the max_frames_to_save frames that led to this event, each frame reporting:
the fully qualified module name plus the class name whose forward was run
the absolute min and max value of all elements for each module weights, and the inputs and output
For example, here are the header and the last few frames of the detection report for google/mt5-small run in fp16 mixed precision:
Detected inf/nan during batch_number=0
Last 21 forward frames:
abs min abs max metadata [...]
encoder.block.2.layer.1.DenseReluDense.wi_0 Linear
2.17e-07 4.50e+00 weight
1.79e-06 4.65e+00 input[0]
2.68e-06 3.70e+01 output
encoder.block.2.layer.1.DenseReluDense.wi_1 Linear
8.08e-07 2.66e+01 weight
1.79e-06 4.65e+00 input[0]
1.27e-04 2.37e+02 output
encoder.block.2.layer.1.DenseReluDense.wo Linear
1.01e-06 6.44e+00 weight
0.00e+00 9.74e+03 input[0]
3.18e-04 6.27e+04 output
encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense
1.79e-06 4.65e+00 input[0]
3.18e-04 6.27e+04 output
encoder.block.2.layer.1.dropout Dropout
3.18e-04 6.27e+04 input[0]
0.00e+00 inf output
You can see here that T5DenseGatedGeluDense.forward resulted in output activations whose absolute max value was around 62.7K, which is very close to fp16's top limit of 64K. In the next frame we have Dropout, which renormalizes the weights after zeroing some of the elements, which pushes the absolute max value to more than 64K, and we get an overflow.
As you can see, it's the previous frames that we need to look into when the numbers start getting very large for fp16.
The tracking is done in a forward hook, which gets invoked immediately after forward has completed.
By default the last 21 frames are printed. You can change the default to adjust for your needs. For example:
debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save=100)
If you intend to use this debugging feature in a training run that may take hours to complete, first validate that you have set it up correctly by running with normal tracing enabled for one or a few batches, as explained in the next section.
Mode 2. Specific batch absolute min/max tracing without detection
The second work mode is per-batch tracing with the underflow/overflow detection feature turned off.
Let's say you want to watch the absolute min and max values for all the ingredients of each forward call of a given batch, and only do that for batches 1 and 3. Then you instantiate this class as:
debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3])
And now full batches 1 and 3 will be traced using the same format as explained above. Batches are 0-indexed.
This is helpful if you know that the program starts misbehaving after a certain batch number, so you can fast-forward right to that area.
Early stopping:
You can also specify the batch number after which to stop the training, with:
debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3], abort_after_batch_num=3)
This feature is mainly useful in the tracing mode, but you can use it for any mode.
Performance:
As this module measures the absolute min/max of each weight of the model on every forward pass, it will slow the training down. Therefore remember to turn it off once the debugging needs have been met.
BARThez
Overview
The BARThez model was proposed in BARThez: a Skilled Pretrained French Sequence-to-Sequence Model by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis on 23 Oct, 2020.
The abstract of the paper:
Inductive transfer learning, enabled by self-supervised learning, have taken the entire Natural Language Processing (NLP) field by storm, with models such as BERT and BART setting new state of the art on countless natural language understanding tasks. While there are some notable exceptions, most of the available models and research have been conducted for the English language. In this work, we introduce BARThez, the first BART model for the French language (to the best of our knowledge). BARThez was pretrained on a very large monolingual French corpus from past research that we adapted to suit BART’s perturbation schemes. Unlike already existing BERT-based French language models such as CamemBERT and FlauBERT, BARThez is particularly well-suited for generative tasks, since not only its encoder but also its decoder is pretrained. In addition to discriminative tasks from the FLUE benchmark, we evaluate BARThez on a novel summarization dataset, OrangeSum, that we release with this paper. We also continue the pretraining of an already pretrained multilingual BART on BARThez’s corpus, and we show that the resulting model, which we call mBARTHez, provides a significant boost over vanilla BARThez, and is on par with or outperforms CamemBERT and FlauBERT.
This model was contributed by moussakam. The Authors’ code can be found here.
Examples
BARThez can be fine-tuned on sequence-to-sequence tasks in a similar way to BART; see examples/pytorch/summarization/.
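A minimal sketch of loading the pretrained model for generation (assuming the moussakam/barthez checkpoint on the Hub; for summarization you would typically fine-tune it first or pick an OrangeSum fine-tuned variant):
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("moussakam/barthez")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("moussakam/barthez")

>>> inputs = tokenizer("Paris est la capitale de la France.", return_tensors="pt")
>>> outputs = model.generate(**inputs, max_new_tokens=20)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))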
BarthezTokenizer
class transformers.BarthezTokenizer
< source >
( vocab_file bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None **kwargs )
Parameters
vocab_file (str) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer.
bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") — The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) — Additional special tokens used by the tokenizer.
sp_model_kwargs (dict, optional) — Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for SentencePiece can be used, among other things, to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.
sp_model (SentencePieceProcessor) — The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Adapted from CamembertTokenizer and BartTokenizer. Construct a BARThez tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A BARThez sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
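For instance (assuming the moussakam/barthez checkpoint), encoding a sentence pair yields the <s> A </s></s> B </s> layout shown above:
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("moussakam/barthez")
>>> ids = tokenizer("Bonjour", "Comment ça va ?")["input_ids"]
>>> tokenizer.decode(ids)  # the special tokens frame the two segments as <s> A </s></s> B </s>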
Converts a sequence of tokens (string) into a single string.
create_token_type_ids_from_sequences
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task.
get_special_tokens_mask
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model.
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.
BarthezTokenizerFast
class transformers.BarthezTokenizerFast
< source >
( vocab_file = None tokenizer_file = None bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' **kwargs )
Parameters
vocab_file (str) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer.
bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") — The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) — Additional special tokens used by the tokenizer.
Adapted from CamembertTokenizer and BartTokenizer. Construct a “fast” BARThez tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A BARThez sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
create_token_type_ids_from_sequences
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. |
Utilities for Generation
This page lists all the utility functions used by generate(), greedy_search(), contrastive_search(), sample(), beam_search(), beam_sample(), group_beam_search(), and constrained_beam_search().
Most of those are only useful if you are studying the code of the generate methods in the library.
Generate Outputs
The output of generate() is an instance of a subclass of ModelOutput. This output is a data structure containing all the information returned by generate(), but it can also be used as a tuple or a dictionary.
Here’s an example:
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
inputs = tokenizer("Hello, my dog is cute and ", return_tensors="pt")
generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)
The generation_output object is a GreedySearchDecoderOnlyOutput. As we can see in the documentation of that class below, this means it has the following attributes:
sequences: the generated sequences of tokens
scores (optional): the prediction scores of the language modelling head, for each generation step
hidden_states (optional): the hidden states of the model, for each generation step
attentions (optional): the attention weights of the model, for each generation step
Here we have the scores since we passed along output_scores=True, but we don’t have hidden_states and attentions because we didn’t pass output_hidden_states=True or output_attentions=True.
You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get None. Here for instance generation_output.scores are all the generated prediction scores of the language modeling head, and generation_output.attentions is None.
When using our generation_output object as a tuple, it only keeps the attributes that don't have None values. Here, for instance, it has two elements, sequences then scores, so
generation_output[:2]
will return the tuple (generation_output.sequences, generation_output.scores).
When using our generation_output object as a dictionary, it only keeps the attributes that don’t have None values. Here, for instance, it has two keys that are sequences and scores.
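For example, continuing the snippet above, both access styles refer to the same underlying values:
generation_output["sequences"]  # identical to generation_output.sequences
generation_output[:2]  # the (sequences, scores) pair, since the other attributes are None here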
We document here all output types.
PyTorch
class transformers.generation.GreedySearchEncoderDecoderOutput
< source >
( sequences: LongTensor = None scores: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None cross_attentions: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None )
Parameters
sequences (torch.LongTensor of shape (batch_size, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
scores (tuple(torch.FloatTensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size).
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer of the encoder) of shape (batch_size, num_heads, sequence_length, sequence_length).
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
decoder_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length).
cross_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length).
decoder_hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, generated_length, hidden_size).
Base class for outputs of encoder-decoder generation models using greedy search. Hidden states and attention weights of the decoder (respectively the encoder) can be accessed via the encoder_attentions and the encoder_hidden_states attributes (respectively the decoder_attentions and the decoder_hidden_states attributes)
class transformers.generation.GreedySearchDecoderOnlyOutput
< source >
( sequences: LongTensor = None scores: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None hidden_states: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None )
Parameters
sequences (torch.LongTensor of shape (batch_size, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
scores (tuple(torch.FloatTensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size).
attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length).
hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, generated_length, hidden_size).
Base class for outputs of decoder-only generation models using greedy search.
class transformers.generation.SampleEncoderDecoderOutput
< source >
( sequences: LongTensor = None scores: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None cross_attentions: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None )
Parameters
sequences (torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
scores (tuple(torch.FloatTensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_return_sequences, config.vocab_size).
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer of the encoder) of shape (batch_size*num_return_sequences, num_heads, sequence_length, sequence_length).
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size*num_return_sequences, sequence_length, hidden_size).
decoder_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_return_sequences, num_heads, generated_length, sequence_length).
cross_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length).
decoder_hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_return_sequences, generated_length, hidden_size).
Base class for outputs of encoder-decoder generation models using sampling. Hidden states and attention weights of the decoder (respectively the encoder) can be accessed via the encoder_attentions and the encoder_hidden_states attributes (respectively the decoder_attentions and the decoder_hidden_states attributes)
class transformers.generation.SampleDecoderOnlyOutput
< source >
( sequences: LongTensor = None scores: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None hidden_states: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None )
Parameters
sequences (torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
scores (tuple(torch.FloatTensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_return_sequences, config.vocab_size).
attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (num_return_sequences*batch_size, num_heads, generated_length, sequence_length).
hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (num_return_sequences*batch_size, generated_length, hidden_size).
Base class for outputs of decoder-only generation models using sampling.
class transformers.generation.BeamSearchEncoderDecoderOutput
< source >
( sequences: LongTensor = None sequences_scores: typing.Optional[torch.FloatTensor] = None scores: typing.Optional[typing.Tuple[torch.FloatTensor]] = None beam_indices: typing.Optional[torch.LongTensor] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None cross_attentions: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None )
Parameters
sequences (torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
sequences_scores (torch.FloatTensor of shape (batch_size*num_return_sequences), optional, returned when output_scores=True is passed or when config.output_scores=True) — Final beam scores of the generated sequences.
scores (tuple(torch.FloatTensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Beam transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size).
beam_indices (torch.LongTensor, optional, returned when output_scores=True is passed or when config.output_scores=True) — Beam indices of generated token id at each generation step. torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length).
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer of the encoder) of shape (batch_size, num_heads, sequence_length, sequence_length).
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size*num_beams*num_return_sequences, sequence_length, hidden_size).
decoder_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams*num_return_sequences, num_heads, generated_length, sequence_length).
cross_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length).
decoder_hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams*num_return_sequences, generated_length, hidden_size).
Base class for outputs of encoder-decoder generation models using beam search. Hidden states and attention weights of the decoder (respectively the encoder) can be accessed via the encoder_attentions and the encoder_hidden_states attributes (respectively the decoder_attentions and the decoder_hidden_states attributes)
class transformers.generation.BeamSearchDecoderOnlyOutput
< source >
( sequences: LongTensor = None sequences_scores: typing.Optional[torch.FloatTensor] = None scores: typing.Optional[typing.Tuple[torch.FloatTensor]] = None beam_indices: typing.Optional[torch.LongTensor] = None attentions: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None hidden_states: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None )
Parameters
sequences (torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
sequences_scores (torch.FloatTensor of shape (batch_size*num_return_sequences), optional, returned when output_scores=True is passed or when config.output_scores=True) — Final beam scores of the generated sequences.
scores (tuple(torch.FloatTensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Beam transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams*num_return_sequences, config.vocab_size).
beam_indices (torch.LongTensor, optional, returned when output_scores=True is passed or when config.output_scores=True) — Beam indices of generated token id at each generation step. torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length).
attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams, num_heads, generated_length, sequence_length).
hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams*num_return_sequences, generated_length, hidden_size).
Base class for outputs of decoder-only generation models using beam search.
class transformers.generation.BeamSampleEncoderDecoderOutput
< source >
( sequences: LongTensor = None sequences_scores: typing.Optional[torch.FloatTensor] = None scores: typing.Optional[typing.Tuple[torch.FloatTensor]] = None beam_indices: typing.Optional[torch.LongTensor] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None cross_attentions: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None )
Parameters
sequences (torch.LongTensor of shape (batch_size*num_beams, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
sequences_scores (torch.FloatTensor of shape (batch_size*num_return_sequences), optional, returned when output_scores=True is passed or when config.output_scores=True) — Final beam scores of the generated sequences.
scores (tuple(torch.FloatTensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Beam transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size)).
beam_indices (torch.LongTensor, optional, returned when output_scores=True is passed or when config.output_scores=True) — Beam indices of generated token id at each generation step. torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length).
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer of the encoder) of shape (batch_size, num_heads, sequence_length, sequence_length).
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size*num_beams, sequence_length, hidden_size).
decoder_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams, num_heads, generated_length, sequence_length).
cross_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length).
decoder_hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams, generated_length, hidden_size).
Base class for outputs of encoder-decoder generation models using beam sampling. Hidden states and attention weights of the decoder (respectively the encoder) can be accessed via the decoder_attentions and the decoder_hidden_states attributes (respectively the encoder_attentions and the encoder_hidden_states attributes).
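These fields are easiest to inspect by asking generate() to return a structured output. A minimal sketch (the t5-small checkpoint is only an illustrative choice; any encoder-decoder model works):
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("t5-small")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
>>> inputs = tokenizer("translate English to German: How are you?", return_tensors="pt")
>>> # num_beams > 1 together with do_sample=True triggers beam sampling
>>> outputs = model.generate(
...     **inputs,
...     num_beams=4,
...     do_sample=True,
...     return_dict_in_generate=True,
...     output_scores=True,
... )
>>> sequences = outputs.sequences  # shape (batch_size*num_return_sequences, sequence_length)
>>> final_scores = outputs.sequences_scores  # only populated because output_scores=True was passed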
class transformers.generation.BeamSampleDecoderOnlyOutput
< source >
( sequences: LongTensor = None sequences_scores: typing.Optional[torch.FloatTensor] = None scores: typing.Optional[typing.Tuple[torch.FloatTensor]] = None beam_indices: typing.Optional[torch.LongTensor] = None attentions: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None hidden_states: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None )
Parameters
sequences (torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
sequences_scores (torch.FloatTensor of shape (batch_size*num_return_sequences), optional, returned when output_scores=True is passed or when config.output_scores=True) — Final beam scores of the generated sequences.
scores (tuple(torch.FloatTensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Beam transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams*num_return_sequences, config.vocab_size).
beam_indices (torch.LongTensor, optional, returned when output_scores=True is passed or when config.output_scores=True) — Beam indices of generated token id at each generation step. torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length).
attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams, num_heads, generated_length, sequence_length).
hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams, generated_length, hidden_size).
Base class for outputs of decoder-only generation models using beam sampling.
class transformers.generation.ContrastiveSearchEncoderDecoderOutput
< source >
( sequences: LongTensor = None scores: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None cross_attentions: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None )
Parameters
sequences (torch.LongTensor of shape (batch_size, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
scores (tuple(torch.FloatTensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size).
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer of the encoder) of shape (batch_size, num_heads, sequence_length, sequence_length).
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
decoder_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length).
cross_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length).
decoder_hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, generated_length, hidden_size).
Base class for outputs of encoder-decoder generation models using contrastive search.
class transformers.generation.ContrastiveSearchDecoderOnlyOutput
< source >
( sequences: LongTensor = None scores: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None hidden_states: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None )
Parameters
sequences (torch.LongTensor of shape (batch_size, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
scores (tuple(torch.FloatTensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size).
attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length).
hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, generated_length, hidden_size).
Base class for outputs of decoder-only generation models using contrastive search.
TensorFlow
class transformers.generation.TFGreedySearchEncoderDecoderOutput
< source >
( sequences: Tensor = None scores: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None encoder_attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None encoder_hidden_states: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None decoder_attentions: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None cross_attentions: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None )
Parameters
sequences (tf.Tensor of shape (batch_size, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
scores (tuple(tf.Tensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size).
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple of tf.Tensor (one for each layer of the encoder) of shape (batch_size, num_heads, sequence_length, sequence_length).
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
decoder_attentions (tuple(tuple(tf.Tensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length).
cross_attentions (tuple(tuple(tf.Tensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length).
decoder_hidden_states (tuple(tuple(tf.Tensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, generated_length, hidden_size).
Base class for outputs of encoder-decoder generation models using greedy search. Hidden states and attention weights of the decoder (respectively the encoder) can be accessed via the decoder_attentions and the decoder_hidden_states attributes (respectively the encoder_attentions and the encoder_hidden_states attributes).
class transformers.generation.TFGreedySearchDecoderOnlyOutput
< source >
( sequences: Tensor = None scores: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None attentions: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None hidden_states: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None )
Parameters
sequences (tf.Tensor of shape (batch_size, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
scores (tuple(tf.Tensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size).
attentions (tuple(tuple(tf.Tensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length).
hidden_states (tuple(tuple(tf.Tensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, generated_length, hidden_size).
Base class for outputs of decoder-only generation models using greedy search.
class transformers.generation.TFSampleEncoderDecoderOutput
< source >
( sequences: Tensor = None scores: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None encoder_attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None encoder_hidden_states: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None decoder_attentions: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None cross_attentions: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None )
Parameters
sequences (tf.Tensor of shape (batch_size*num_return_sequences, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
scores (tuple(tf.Tensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_return_sequences, config.vocab_size).
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple of tf.Tensor (one for each layer of the encoder) of shape (batch_size*num_return_sequences, num_heads, sequence_length, sequence_length).
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size*num_return_sequences, sequence_length, hidden_size).
decoder_attentions (tuple(tuple(tf.Tensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_return_sequences, num_heads, generated_length, sequence_length).
cross_attentions (tuple(tuple(tf.Tensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length).
decoder_hidden_states (tuple(tuple(tf.Tensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_return_sequences, generated_length, hidden_size).
Base class for outputs of encoder-decoder generation models using sampling. Hidden states and attention weights of the decoder (respectively the encoder) can be accessed via the decoder_attentions and the decoder_hidden_states attributes (respectively the encoder_attentions and the encoder_hidden_states attributes).
class transformers.generation.TFSampleDecoderOnlyOutput
< source >
( sequences: Tensor = None scores: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None attentions: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None hidden_states: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None )
Parameters
sequences (tf.Tensor of shape (batch_size*num_return_sequences, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
scores (tuple(tf.Tensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_return_sequences, config.vocab_size).
attentions (tuple(tuple(tf.Tensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (num_return_sequences*batch_size, num_heads, generated_length, sequence_length).
hidden_states (tuple(tuple(tf.Tensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (num_return_sequences*batch_size, generated_length, hidden_size).
Base class for outputs of decoder-only generation models using sampling.
class transformers.generation.TFBeamSearchEncoderDecoderOutput
< source >
( sequences: Tensor = None sequences_scores: typing.Optional[tensorflow.python.framework.ops.Tensor] = None scores: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None beam_indices: typing.Optional[tensorflow.python.framework.ops.Tensor] = None encoder_attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None encoder_hidden_states: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None decoder_attentions: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None cross_attentions: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None )
Parameters
sequences (tf.Tensor of shape (batch_size*num_return_sequences, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
sequences_scores (tf.Tensor of shape (batch_size*num_return_sequences), optional, returned when output_scores=True is passed or when config.output_scores=True) — Final beam scores of the generated sequences.
scores (tuple(tf.Tensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Processed beam scores for each vocabulary token at each generation step. Beam scores consisting of log softmax scores for each vocabulary token and sum of log softmax of previously generated tokens in this beam. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size).
beam_indices (tf.Tensor, optional, returned when output_scores=True is passed or when config.output_scores=True) — Beam indices of generated token id at each generation step. tf.Tensor of shape (batch_size*num_return_sequences, sequence_length).
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple of tf.Tensor (one for each layer of the encoder) of shape (batch_size, num_heads, sequence_length, sequence_length).
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size*num_beams*num_return_sequences, sequence_length, hidden_size).
decoder_attentions (tuple(tuple(tf.Tensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_beams*num_return_sequences, num_heads, generated_length, sequence_length).
cross_attentions (tuple(tuple(tf.Tensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length).
decoder_hidden_states (tuple(tuple(tf.Tensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_beams*num_return_sequences, generated_length, hidden_size).
Base class for outputs of encoder-decoder generation models using beam search. Hidden states and attention weights of the decoder (respectively the encoder) can be accessed via the decoder_attentions and the decoder_hidden_states attributes (respectively the encoder_attentions and the encoder_hidden_states attributes).
class transformers.generation.TFBeamSearchDecoderOnlyOutput
< source >
( sequences: Tensor = None sequences_scores: typing.Optional[tensorflow.python.framework.ops.Tensor] = None scores: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None beam_indices: typing.Optional[tensorflow.python.framework.ops.Tensor] = None attentions: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None hidden_states: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None )
Parameters
sequences (tf.Tensor of shape (batch_size*num_return_sequences, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
sequences_scores (tf.Tensor of shape (batch_size*num_return_sequences), optional, returned when output_scores=True is passed or when config.output_scores=True) — Final beam scores of the generated sequences.
scores (tuple(tf.Tensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Processed beam scores for each vocabulary token at each generation step. Beam scores consisting of log softmax scores for each vocabulary token and sum of log softmax of previously generated tokens in this beam. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams*num_return_sequences, config.vocab_size).
beam_indices (tf.Tensor, optional, returned when output_scores=True is passed or when config.output_scores=True) — Beam indices of generated token id at each generation step. tf.Tensor of shape (batch_size*num_return_sequences, sequence_length).
attentions (tuple(tuple(tf.Tensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_beams, num_heads, generated_length, sequence_length).
hidden_states (tuple(tuple(tf.Tensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_beams*num_return_sequences, generated_length, hidden_size).
Base class for outputs of decoder-only generation models using beam search.
class transformers.generation.TFBeamSampleEncoderDecoderOutput
< source >
( sequences: Tensor = None sequences_scores: typing.Optional[tensorflow.python.framework.ops.Tensor] = None scores: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None beam_indices: typing.Optional[tensorflow.python.framework.ops.Tensor] = None encoder_attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None encoder_hidden_states: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None decoder_attentions: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None cross_attentions: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None )
Parameters
sequences (tf.Tensor of shape (batch_size*num_beams, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
sequences_scores (tf.Tensor of shape (batch_size*num_return_sequences), optional, returned when output_scores=True is passed or when config.output_scores=True) — Final beam scores of the generated sequences.
scores (tuple(tf.Tensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Processed beam scores for each vocabulary token at each generation step. Beam scores consisting of log softmax scores for each vocabulary token and sum of log softmax of previously generated tokens in this beam. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size).
beam_indices (tf.Tensor, optional, returned when output_scores=True is passed or when config.output_scores=True) — Beam indices of generated token id at each generation step. tf.Tensor of shape (batch_size*num_return_sequences, sequence_length).
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple of tf.Tensor (one for each layer of the encoder) of shape (batch_size, num_heads, sequence_length, sequence_length).
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size*num_beams, sequence_length, hidden_size).
decoder_attentions (tuple(tuple(tf.Tensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_beams, num_heads, generated_length, sequence_length).
cross_attentions (tuple(tuple(tf.Tensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length).
decoder_hidden_states (tuple(tuple(tf.Tensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_beams, generated_length, hidden_size).
Base class for outputs of encoder-decoder generation models using beam sampling. Hidden states and attention weights of the decoder (respectively the encoder) can be accessed via the decoder_attentions and the decoder_hidden_states attributes (respectively the encoder_attentions and the encoder_hidden_states attributes).
class transformers.generation.TFBeamSampleDecoderOnlyOutput
< source >
( sequences: Tensor = None sequences_scores: typing.Optional[tensorflow.python.framework.ops.Tensor] = None scores: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None beam_indices: typing.Optional[tensorflow.python.framework.ops.Tensor] = None attentions: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None hidden_states: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None )
Parameters
sequences (tf.Tensor of shape (batch_size*num_return_sequences, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
sequences_scores (tf.Tensor of shape (batch_size*num_return_sequences), optional, returned when output_scores=True is passed or when config.output_scores=True) — Final beam scores of the generated sequences.
scores (tuple(tf.Tensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Processed beam scores for each vocabulary token at each generation step. Beam scores consisting of log softmax scores for each vocabulary token and sum of log softmax of previously generated tokens in this beam. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams*num_return_sequences, config.vocab_size).
beam_indices (tf.Tensor, optional, returned when output_scores=True is passed or when config.output_scores=True) — Beam indices of generated token id at each generation step. tf.Tensor of shape (batch_size*num_return_sequences, sequence_length).
attentions (tuple(tuple(tf.Tensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_beams, num_heads, generated_length, sequence_length).
hidden_states (tuple(tuple(tf.Tensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_beams, generated_length, hidden_size).
Base class for outputs of decoder-only generation models using beam sampling.
class transformers.generation.TFContrastiveSearchEncoderDecoderOutput
< source >
( sequences: Tensor = None scores: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None encoder_attentions: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None encoder_hidden_states: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None decoder_attentions: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None cross_attentions: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None )
Parameters
sequences (tf.Tensor of shape (batch_size, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
scores (tuple(tf.Tensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size).
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple of tf.Tensor (one for each layer of the encoder) of shape (batch_size, num_heads, sequence_length, sequence_length).
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
decoder_attentions (tuple(tuple(tf.Tensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length).
cross_attentions (tuple(tuple(tf.Tensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length).
decoder_hidden_states (tuple(tuple(tf.Tensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, generated_length, hidden_size).
Base class for outputs of encoder-decoder generation models using contrastive search. Hidden states and attention weights of the decoder (respectively the encoder) can be accessed via the decoder_attentions and the decoder_hidden_states attributes (respectively the encoder_attentions and the encoder_hidden_states attributes).
class transformers.generation.TFContrastiveSearchDecoderOnlyOutput
< source >
( sequences: Tensor = None scores: typing.Optional[typing.Tuple[tensorflow.python.framework.ops.Tensor]] = None attentions: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None hidden_states: typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.ops.Tensor]]] = None )
Parameters
sequences (tf.Tensor of shape (batch_size, sequence_length)) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
scores (tuple(tf.Tensor) optional, returned when output_scores=True is passed or when config.output_scores=True) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size).
attentions (tuple(tuple(tf.Tensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length).
hidden_states (tuple(tuple(tf.Tensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, generated_length, hidden_size).
Base class for outputs of decoder-only generation models using contrastive search.
FLAX
class transformers.generation.FlaxSampleOutput
< source >
( sequences: Array = None )
Parameters
sequences (jnp.ndarray of shape (batch_size, max_length)) — The generated sequences.
Flax Base class for outputs of decoder-only generation models using sampling.
Returns a new object replacing the specified fields with new values.
class transformers.generation.FlaxGreedySearchOutput
< source >
( sequences: Array = None )
Parameters
sequences (jnp.ndarray of shape (batch_size, max_length)) — The generated sequences.
Flax Base class for outputs of decoder-only generation models using greedy search.
Returns a new object replacing the specified fields with new values.
class transformers.generation.FlaxBeamSearchOutput
< source >
( sequences: Array = None scores: Array = None )
Parameters
sequences (jnp.ndarray of shape (batch_size, max_length)) — The generated sequences.
scores (jnp.ndarray of shape (batch_size,)) — The scores (log probabilities) of the generated sequences.
Flax Base class for outputs of decoder-only generation models using beam search.
Returns a new object replacing the specified fields with new values.
LogitsProcessor
A LogitsProcessor can be used to modify the prediction scores of a language model head for generation.
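Concretely, a processor is a callable that receives the current input_ids and the next-token scores and returns modified scores. The sketch below subclasses LogitsProcessor and passes it to generate() via the logits_processor argument; SuppressTokenLogitsProcessor is a hypothetical name used here only for illustration:
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessor, LogitsProcessorList
>>> class SuppressTokenLogitsProcessor(LogitsProcessor):
...     def __init__(self, token_id: int):
...         self.token_id = token_id
...
...     def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
...         scores[:, self.token_id] = -float("inf")  # the token can never be sampled
...         return scores
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tokenizer("Hello, my name is", return_tensors="pt")
>>> outputs = model.generate(
...     **inputs,
...     logits_processor=LogitsProcessorList([SuppressTokenLogitsProcessor(tokenizer.eos_token_id)]),
... )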
PyTorch
class transformers.AlternatingCodebooksLogitsProcessor
< source >
( input_start_len: int semantic_vocab_size: int codebook_size: int )
Parameters
input_start_len (int) — The length of the initial input sequence.
semantic_vocab_size (int) — Vocabulary size of the semantic part, i.e. the number of tokens associated with the semantic vocabulary.
codebook_size (int) — Number of tokens associated with the codebook.
LogitsProcessor enforcing alternated generation between the two codebooks of Bark’s fine submodel.
__call__
< source >
( input_ids: LongTensor scores: FloatTensor )
class transformers.ClassifierFreeGuidanceLogitsProcessor
< source >
( guidance_scale )
Parameters
guidance_scale (float) — The guidance scale for classifier free guidance (CFG). CFG is enabled by setting guidance_scale > 1. Higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer quality.
Logits processor for classifier free guidance (CFG). The scores are split over the batch dimension, where the first half corresponds to the conditional logits (predicted from the input prompt) and the second half corresponds to the unconditional logits (predicted from an empty or ‘null’ prompt). The processor computes a weighted average across the conditional and unconditional logits, parameterised by the guidance_scale.
See the paper for more information.
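In rough pseudocode, the weighted average amounts to the following (a sketch of the idea inside the processor’s call, not necessarily the exact implementation):
>>> # scores stacks the conditional and unconditional halves along the batch dimension
>>> cond_logits, uncond_logits = scores.chunk(2, dim=0)
>>> guided_scores = uncond_logits + guidance_scale * (cond_logits - uncond_logits)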
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.EncoderNoRepeatNGramLogitsProcessor
< source >
( encoder_ngram_size: int encoder_input_ids: LongTensor )
Parameters
encoder_ngram_size (int) — All ngrams of size encoder_ngram_size can only occur within the encoder input ids.
encoder_input_ids (torch.LongTensor) — The encoder_input_ids that should not be repeated within the decoder ids.
LogitsProcessor that enforces no repetition of encoder input ids n-grams for the decoder ids. See ParlAI.
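In practice this processor is usually enabled through the encoder_no_repeat_ngram_size argument of generate() rather than instantiated directly. A minimal sketch (the BART checkpoint is only an illustrative choice):
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")
>>> inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
>>> # No 3-gram from the encoder input can be copied verbatim into the generated text
>>> outputs = model.generate(**inputs, encoder_no_repeat_ngram_size=3)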
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.EncoderRepetitionPenaltyLogitsProcessor
< source >
( penalty: float encoder_input_ids: LongTensor )
Parameters
penalty (float) — The parameter for the hallucination penalty. 1.0 means no penalty.
encoder_input_ids (torch.LongTensor) — The encoder_input_ids that should be repeated within the decoder ids.
LogitsProcessor enforcing an exponential penalty on tokens that are not in the original input.
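This processor is typically enabled through the encoder_repetition_penalty argument of generate(); values above 1.0 bias generation towards tokens that appear in the original input. A minimal sketch, assuming an encoder-decoder model and tokenized inputs as in the sketch above:
>>> # Favour tokens that occur in the encoder input
>>> outputs = model.generate(**inputs, encoder_repetition_penalty=1.2)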
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.EpsilonLogitsWarper
< source >
( epsilon: float filter_value: float = -inf min_tokens_to_keep: int = 1 )
Parameters
epsilon (float) — If set to > 0, only tokens with a probability of epsilon or higher are kept for generation.
filter_value (float, optional, defaults to -float("Inf")) — All filtered values will be set to this float value.
min_tokens_to_keep (int, optional, defaults to 1) — Minimum number of tokens that cannot be filtered.
LogitsWarper that performs epsilon-sampling, i.e. restricting to tokens with prob >= epsilon. Takes the largest min_tokens_to_keep tokens if no tokens satisfy this constraint. See Truncation Sampling as Language Model Desmoothing for more information.
Examples:
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed
>>> set_seed(0)
>>> model = AutoModelForCausalLM.from_pretrained("distilgpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
>>> inputs = tokenizer("A sequence: 1, 2", return_tensors="pt")
>>> # With plain sampling, low-probability continuations can show up in the output
>>> outputs = model.generate(**inputs, do_sample=True)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: 1, 2, 0, 2, 2. 2, 2, 2, 2
>>> # With epsilon sampling, tokens whose probability falls below the cutoff are filtered out,
>>> # restricting generation to higher-probability continuations
>>> outputs = model.generate(**inputs, do_sample=True, epsilon_cutoff=0.1)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: 1, 2, 3, 4, 5, 6, 7, 8, 9
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.EtaLogitsWarper
< source >
( epsilon: float filter_value: float = -inf min_tokens_to_keep: int = 1 )
Parameters
epsilon (float) — A float value in the range (0, 1). Hyperparameter used to calculate the dynamic cutoff value, eta. The values suggested in the paper range from 3e-4 to 4e-3, depending on the size of the model.
filter_value (float, optional, defaults to -float("Inf")) — All values that are found to be below the dynamic cutoff value, eta, are set to this float value. This parameter is useful when logits need to be modified for very low probability tokens that should be excluded from generation entirely.
min_tokens_to_keep (int, optional, defaults to 1) — Specifies the minimum number of tokens that must be kept for generation, regardless of their probabilities. For example, if min_tokens_to_keep is set to 1, at least one token will always be kept for generation, even if all tokens have probabilities below the cutoff eta.
LogitsWarper that performs eta-sampling, a technique to filter out tokens with probabilities below a dynamic cutoff value, eta, which is calculated from a combination of the hyperparameter epsilon and the entropy of the token probabilities, i.e. eta := min(epsilon, sqrt(epsilon * e^-entropy(probabilities))). Takes the largest min_tokens_to_keep tokens if no tokens satisfy this constraint. It addresses the issue of poor quality in long samples of text generated by neural language models, leading to more coherent and fluent text. See Truncation Sampling as Language Model Desmoothing for more information. Note: do_sample must be set to True for this LogitsWarper to work.
Examples:
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed
>>> set_seed(0)
>>> model = AutoModelForCausalLM.from_pretrained("distilgpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
>>> inputs = tokenizer("A sequence: 1, 2", return_tensors="pt")
>>> # With plain sampling, low-probability continuations can show up in the output
>>> outputs = model.generate(**inputs, do_sample=True)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: 1, 2, 0, 2, 2. 2, 2, 2, 2
>>> # With eta sampling, a dynamic, entropy-dependent cutoff filters out the least likely tokens
>>> outputs = model.generate(**inputs, do_sample=True, eta_cutoff=0.1)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: 1, 2, 3, 4, 5, 6, 7, 8, 9
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.ExponentialDecayLengthPenalty
< source >
( exponential_decay_length_penalty: typing.Tuple[int, float] eos_token_id: typing.Union[int, typing.List[int]] input_ids_seq_length: int )
Parameters
exponential_decay_length_penalty (tuple(int, float)) — This tuple consists of (start_index, decay_factor), where start_index indicates where the penalty starts and decay_factor represents the factor of exponential decay.
eos_token_id (Union[int, List[int]]) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens.
input_ids_seq_length (int) — The length of the input sequence.
LogitsProcessor that exponentially increases the score of the eos_token_id after start_index has been reached. This allows generating shorter sequences without having a hard cutoff, allowing the eos_token to be predicted in a meaningful position.
Examples:
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed
>>> set_seed(1)
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> text = "Just wanted to let you know, I"
>>> inputs = tokenizer(text, return_tensors="pt")
>>> # Generate without the exponential penalty; with max_length=30 the output
>>> # tends to be cut off abruptly
>>> outputs = model.generate(**inputs, do_sample=True, temperature=0.9, max_length=30, pad_token_id=50256)
>>> print(tokenizer.batch_decode(outputs)[0])
Just wanted to let you know, I'm not even a lawyer. I'm a man. I have no real knowledge of politics. I'm a
>>> # Generate sequences with exponential penalty, we add the exponential_decay_length_penalty=(start_index, decay_factor)
>>> # We see that instead of cutting at max_tokens, the output comes to an end before (at 25 tokens) and with more meaning
>>> # What happens is that starting from `start_index` the EOS token score will be increased by decay_factor exponentially
>>> outputs = model.generate(
...     **inputs,
...     do_sample=True,
...     temperature=0.9,
...     max_length=30,
...     pad_token_id=50256,
...     exponential_decay_length_penalty=(15, 1.6),
... )
>>> print(tokenizer.batch_decode(outputs)[0])
Just wanted to let you know, I've got a very cool t-shirt educating people on how to use the Internet<|endoftext|>
>>> # A smaller decay_factor applies a gentler push towards the end-of-sequence token
>>> outputs = model.generate(
... **inputs,
... do_sample=True,
... temperature=0.9,
... max_length=30,
... pad_token_id=50256,
... exponential_decay_length_penalty=(15, 1.05),
... )
>>> print(tokenizer.batch_decode(outputs)[0])
Just wanted to let you know, I've been working on it for about 6 months and now it's in Alpha.<|endoftext|>
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.ForcedBOSTokenLogitsProcessor
< source >
( bos_token_id: int )
Parameters
bos_token_id (int) — The id of the token to force as the first generated token.
LogitsProcessor that enforces the specified token as the first generated token.
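This is most often enabled through the forced_bos_token_id argument of generate(); multilingual translation models such as mBART use it to force the target-language token as the first generated token. A minimal sketch with a placeholder name (target_lang_token_id stands for the id of the desired target-language token):
>>> outputs = model.generate(**inputs, forced_bos_token_id=target_lang_token_id)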
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.ForcedEOSTokenLogitsProcessor
< source >
( max_length: int eos_token_id: typing.Union[int, typing.List[int]] )
Parameters
max_length (int) — The maximum length of the sequence to be generated.
eos_token_id (Union[int, List[int]]) — The id of the token to force as the last generated token when max_length is reached. Optionally, use a list to set multiple end-of-sequence tokens.
LogitsProcessor that enforces the specified token as the last generated token when max_length is reached.
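Analogously, this processor is usually enabled through the forced_eos_token_id argument of generate(), as in this minimal sketch:
>>> # Force the end-of-sequence token once max_length is reached
>>> outputs = model.generate(**inputs, max_length=30, forced_eos_token_id=tokenizer.eos_token_id)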
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.ForceTokensLogitsProcessor
< source >
( force_token_map: typing.List[typing.List[int]] )
This processor takes a list of pairs of integers which indicates a mapping from generation indices to token indices that will be forced before sampling. The processor forces each mapped token at its corresponding generation index by setting the log probabilities of all other tokens to -inf.
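This processor backs the forced_decoder_ids generation argument (used, for example, by Whisper to force language and task tokens at fixed positions). A minimal sketch with placeholder token ids (lang_token_id and task_token_id are hypothetical names, not real vocabulary entries):
>>> # Each pair is [generation_index, token_id]
>>> outputs = model.generate(**inputs, forced_decoder_ids=[[1, lang_token_id], [2, task_token_id]])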
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.HammingDiversityLogitsProcessor
< source >
( diversity_penalty: float num_beams: int num_beam_groups: int )
Parameters
diversity_penalty (float) — This value is subtracted from a beam’s score if it generates a token same as any beam from other group at a particular time. Note that diversity_penalty is only effective if group beam search is enabled. The penalty applied to a beam’s score when it generates a token that has already been chosen by another beam within the same group during the same time step. A higher diversity_penalty will enforce greater diversity among the beams, making it less likely for multiple beams to choose the same token. Conversely, a lower penalty will allow beams to more freely choose similar tokens. Adjusting this value can help strike a balance between diversity and natural likelihood.
num_beams (int) — Number of beams used for group beam search. Beam search is a method used that maintains beams (or “multiple hypotheses”) at each step, expanding each one and keeping the top-scoring sequences. A higher num_beams will explore more potential sequences. This can increase chances of finding a high-quality output but also increases computational cost.
num_beam_groups (int) — Number of groups to divide num_beams into in order to ensure diversity among different groups of beams. Each group of beams will operate independently, selecting tokens without considering the choices of other groups. This division promotes diversity by ensuring that beams within different groups explore different paths. For instance, if num_beams is 6 and num_beam_groups is 2, there will be 2 groups each containing 3 beams. The choice of num_beam_groups should be made considering the desired level of output diversity and the total number of beams. See this paper for more details.
LogitsProcessor that enforces diverse beam search.
Note that this logits processor is only effective for PreTrainedModel.group_beam_search(). See Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models for more details.
Diverse beam search can be particularly useful in scenarios where a variety of different outputs is desired, rather than multiple similar sequences. It allows the model to explore different generation paths and provides a broader coverage of possible outputs.
This logits processor can be resource-intensive, especially when using large models or long sequences.
Traditional beam search often generates very similar sequences across different beams. HammingDiversityLogitsProcessor addresses this by penalizing beams that generate tokens already chosen by other beams in the same time step.
How It Works:
Grouping Beams: Beams are divided into groups. Each group selects tokens independently of the others.
Penalizing Repeated Tokens: If a beam in a group selects a token already chosen by another group in the same step, a penalty is applied to that token’s score.
Promoting Diversity: This penalty discourages beams within a group from selecting the same tokens as beams in other groups.
Benefits:
Diverse Outputs: Produces a variety of different sequences.
Exploration: Allows the model to explore different paths.
Examples:
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> import torch
>>> # Initialize the model and tokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("t5-base")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> # A long paragraph about the Solar System to summarize
>>> text = "The Solar System is a gravitationally bound system comprising the Sun and the objects that orbit it, either directly or indirectly. Of the objects that orbit the Sun directly, the largest are the eight planets, with the remainder being smaller objects, such as the five dwarf planets and small Solar System bodies. The Solar System formed 4.6 billion years ago from the gravitational collapse of a giant interstellar molecular cloud."
>>> inputs = tokenizer("summarize: " + text, return_tensors="pt")
>>>
>>> outputs_diverse = model.generate(
... **inputs,
... num_beam_groups=2,
... diversity_penalty=10.0,
... max_length=100,
... num_beams=4,
... num_return_sequences=2,
... )
>>> summaries_diverse = tokenizer.batch_decode(outputs_diverse, skip_special_tokens=True)
>>>
>>> outputs_non_diverse = model.generate(
... **inputs,
... max_length=100,
... num_beams=4,
... num_return_sequences=2,
... )
>>> summary_non_diverse = tokenizer.batch_decode(outputs_non_diverse, skip_special_tokens=True)
>>>
>>> print(summary_non_diverse)
['the solar system formed 4.6 billion years ago from the collapse of a giant interstellar molecular cloud. of the objects that orbit the Sun directly, the largest are the eight planets.',
'the Solar System formed 4.6 billion years ago from the collapse of a giant interstellar molecular cloud. of the objects that orbit the Sun directly, the largest are the eight planets.']
>>> print(summaries_diverse)
['the solar system formed 4.6 billion years ago from the collapse of a giant interstellar molecular cloud. of the objects that orbit the Sun directly, the largest are the eight planets.',
'the solar system formed 4.6 billion years ago from the collapse of a giant interstellar molecular cloud. of the objects that orbit the Sun directly, the largest are the eight planets. the rest of the objects are smaller objects, such as the five dwarf planets and small solar system bodies.']
__call__
< source >
( input_ids: LongTensor scores: FloatTensor current_tokens: LongTensor beam_group_idx: int ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
current_tokens (torch.LongTensor of shape (batch_size)) — Indices of input sequence tokens in the vocabulary, corresponding to the tokens selected by the other beam groups in the current generation step.
beam_group_idx (int) — The index of the beam group currently being processed.
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.InfNanRemoveLogitsProcessor
< source >
( )
LogitsProcessor that removes all nan and inf values to prevent the generation method from failing. Note that this logits processor should only be used if necessary, since it can slow down the generation method.
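A minimal usage sketch (not part of the upstream docstring): this processor is normally enabled through the remove_invalid_values flag of generate(); the gpt2 checkpoint and the prompt are illustrative assumptions.
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tokenizer("Numerical stability matters because", return_tensors="pt")
>>> # remove_invalid_values=True adds InfNanRemoveLogitsProcessor to the default processor list
>>> outputs = model.generate(
...     **inputs, max_new_tokens=10, remove_invalid_values=True, pad_token_id=tokenizer.eos_token_id
... )
>>> text = tokenizer.decode(outputs[0], skip_special_tokens=True)  # generated text varies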
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.LogitNormalization
< source >
( )
LogitsWarper and LogitsProcessor for normalizing the scores using log-softmax. It is important to normalize the scores during beam search, after applying the logits processors or warpers, because the search algorithm used in this library only normalizes the scores before the processors are applied (so they may need re-normalization afterwards) yet still assumes that the scores are normalized when comparing hypotheses.
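A hedged sketch (not from the upstream docstring) of the usual way to apply this normalization, via the renormalize_logits flag of generate(); the gpt2 checkpoint, prompt, and sampling settings are illustrative.
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tokenizer("Renormalizing the scores", return_tensors="pt")
>>> # renormalize_logits=True appends LogitNormalization after all other processors and warpers
>>> outputs = model.generate(
...     **inputs, do_sample=True, top_p=0.9, renormalize_logits=True, max_new_tokens=10, pad_token_id=tokenizer.eos_token_id
... )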
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
Abstract base class for all logit processors that can be applied during generation.
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.LogitsProcessorList
< source >
( iterable = () )
This class can be used to create a list of LogitsProcessor or LogitsWarper to subsequently process a scores input tensor. This class inherits from list and adds a specific call method to apply each LogitsProcessor or LogitsWarper to the inputs.
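An illustrative sketch (not part of the upstream docstring) that builds a custom list and passes it to generate() through the logits_processor argument; the gpt2 checkpoint, the prompt, and the chosen processors are assumptions.
>>> from transformers import (
...     AutoTokenizer,
...     AutoModelForCausalLM,
...     LogitsProcessorList,
...     MinLengthLogitsProcessor,
...     RepetitionPenaltyLogitsProcessor,
... )
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tokenizer("A custom list of processors", return_tensors="pt")
>>> # each processor in the list is applied to the scores, in order
>>> processors = LogitsProcessorList(
...     [
...         MinLengthLogitsProcessor(15, eos_token_id=model.config.eos_token_id),
...         RepetitionPenaltyLogitsProcessor(1.2),
...     ]
... )
>>> outputs = model.generate(
...     **inputs, logits_processor=processors, max_new_tokens=20, pad_token_id=tokenizer.eos_token_id
... )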
__call__
< source >
( input_ids: LongTensor scores: FloatTensor **kwargs ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
kwargs (Dict[str, Any], optional) — Additional kwargs that are specific to a logits processor.
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
Abstract base class for all logit warpers that can be applied during generation with multinomial sampling.
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.MinLengthLogitsProcessor
< source >
( min_length: int eos_token_id: typing.Union[int, typing.List[int]] )
Parameters
min_length (int) — The minimum length below which the score of eos_token_id is set to -float("Inf").
eos_token_id (Union[int, List[int]]) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens.
LogitsProcessor enforcing a min-length by setting EOS probability to 0.
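A short sketch (not from the upstream docstring) showing how this processor is typically enabled via the min_length argument of generate(); the gpt2 checkpoint and prompt are illustrative.
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tokenizer("Short prompts can end early, but", return_tensors="pt")
>>> # EOS cannot be generated before the sequence reaches min_length tokens (prompt included)
>>> outputs = model.generate(
...     **inputs, min_length=30, max_length=40, pad_token_id=tokenizer.eos_token_id
... )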
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.MinNewTokensLengthLogitsProcessor
< source >
( prompt_length_to_skip: int min_new_tokens: int eos_token_id: typing.Union[int, typing.List[int]] )
Parameters
prompt_length_to_skip (int) — The input tokens length. Not a valid argument when used with generate as it will automatically assign the input length.
min_new_tokens (int) — The minimum new tokens length below which the score of eos_token_id is set to -float("Inf").
eos_token_id (Union[int, List[int]]) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens.
LogitsProcessor enforcing a min-length of new tokens by setting the EOS (End-Of-Sequence) token probability to 0. Note that for decoder-only models, such as Llama2, min_length will count the length of the prompt plus the newly generated tokens, whereas for other models it will behave like min_new_tokens, that is, taking into account only the newly generated tokens.
Examples:
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
>>> model = AutoModelForCausalLM.from_pretrained("distilgpt2")
>>> model.config.pad_token_id = model.config.eos_token_id
>>> inputs = tokenizer(["Hugging Face Company is"], return_tensors="pt")
>>>
>>> outputs = model.generate(**inputs, min_new_tokens=30)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Hugging Face Company is a company that has been working on a new product for the past year.
>>>
>>>
>>> outputs = model.generate(**inputs, eos_token_id=1664)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Hugging Face Company is a company
>>>
>>>
>>> outputs = model.generate(**inputs, min_new_tokens=2, eos_token_id=1664)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Hugging Face Company is a new company
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.NoBadWordsLogitsProcessor
< source >
( bad_words_ids: typing.List[typing.List[int]] eos_token_id: typing.Union[int, typing.List[int]] )
Parameters
bad_words_ids (List[List[int]]) — List of list of token ids that are not allowed to be generated.
eos_token_id (Union[int, List[int]]) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens.
LogitsProcessor that enforces that specified sequences will never be selected.
In order to get the token ids of the words that should not appear in the generated text, make sure to set add_prefix_space=True when initializing the tokenizer, and use tokenizer(bad_words, add_special_tokens=False).input_ids. The add_prefix_space argument is only supported for some slow tokenizers, as fast tokenizers’ prefixing behaviours come from pre tokenizers. Read more here.
Examples:
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> inputs = tokenizer(["In a word, the cake is a"], return_tensors="pt")
>>> output_ids = model.generate(inputs["input_ids"], max_new_tokens=5, pad_token_id=tokenizer.eos_token_id)
>>> print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
In a word, the cake is a bit of a mess.
>>>
>>> tokenizer_with_prefix_space = AutoTokenizer.from_pretrained("gpt2", add_prefix_space=True)
>>> def get_tokens_as_list(word_list):
... "Converts a sequence of words into a list of tokens"
... tokens_list = []
... for word in word_list:
... tokenized_word = tokenizer_with_prefix_space([word], add_special_tokens=False).input_ids[0]
... tokens_list.append(tokenized_word)
... return tokens_list
>>> bad_words_ids = get_tokens_as_list(word_list=["mess"])
>>> output_ids = model.generate(
... inputs["input_ids"], max_new_tokens=5, bad_words_ids=bad_words_ids, pad_token_id=tokenizer.eos_token_id
... )
>>> print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
In a word, the cake is a bit of a surprise.
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.NoRepeatNGramLogitsProcessor
< source >
( ngram_size: int )
Parameters
ngram_size (int) — All ngrams of size ngram_size can only occur once.
N-grams are groups of “n” consecutive words, characters, or tokens taken from a sequence of text. Given the sentence “She runs fast”, the bi-grams (n=2) would be (“she”, “runs”) and (“runs”, “fast”). In text generation, avoiding repetitions of word sequences provides a more diverse output. This LogitsProcessor enforces no repetition of n-grams by setting the scores of banned tokens to negative infinity, which eliminates those tokens from consideration when further processing the scores (see the Fairseq implementation for the original approach).
Use n-gram penalties with care. For instance, penalizing 2-grams (bigrams) in an article about the city of New York might lead to undesirable outcomes where the city’s name appears only once in the entire text. Reference
Examples:
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> model = AutoModelForCausalLM.from_pretrained("distilgpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
>>> inputs = tokenizer(["Today I"], return_tensors="pt")
>>> output = model.generate(**inputs)
>>> print(tokenizer.decode(output[0], skip_special_tokens=True))
Today I’m not sure if I’m going to be able to do it.
>>>
>>> output = model.generate(**inputs, no_repeat_ngram_size=2)
>>> print(tokenizer.decode(output[0], skip_special_tokens=True))
Today I’m not sure if I can get a better understanding of the nature of this issue
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.PrefixConstrainedLogitsProcessor
< source >
( prefix_allowed_tokens_fn: typing.Callable[[int, torch.Tensor], typing.List[int]] num_beams: int )
Parameters
prefix_allowed_tokens_fn (Callable[[int, torch.Tensor], List[int]]) — This function constrains the beam search to allowed tokens only at each step. It takes two arguments, the batch ID batch_id and input_ids, and it must return a list of the allowed tokens for the next generation step, conditioned on the previously generated tokens input_ids and the batch ID batch_id.
LogitsProcessor that enforces constrained generation and is useful for prefix-conditioned constrained generation. See Autoregressive Entity Retrieval for more information.
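A minimal sketch (not part of the upstream docstring) using the prefix_allowed_tokens_fn argument of generate(); the gpt2 checkpoint, the prompt, and the single-word whitelist are illustrative assumptions.
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tokenizer("The capital of France is", return_tensors="pt")
>>> allowed = tokenizer(" Paris", add_special_tokens=False).input_ids  # hypothetical whitelist
>>> def allow_only(batch_id, input_ids):
...     # called at every step with the tokens generated so far for this beam
...     return allowed
>>> outputs = model.generate(
...     **inputs, num_beams=2, prefix_allowed_tokens_fn=allow_only, max_new_tokens=3, pad_token_id=tokenizer.eos_token_id
... )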
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.RepetitionPenaltyLogitsProcessor
< source >
( penalty: float )
Parameters
repetition_penalty (float) — The parameter for repetition penalty. 1.0 means no penalty. See this paper for more details.
LogitsProcessor that prevents the repetition of previous tokens through an exponential penalty. This technique shares some similarities with coverage mechanisms and others aimed at reducing repetition. During the text generation process, the probability distribution for the next token is determined using a formula that adjusts token scores based on their occurrence in the generated sequence. Tokens with higher scores are more likely to be selected. The formula can be found in the original paper. According to the paper, a penalty of around 1.2 yields a good balance between truthful generation and lack of repetition.
Examples:
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>>
>>> model = AutoModelForCausalLM.from_pretrained("distilgpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
>>> inputs = tokenizer(["I'm not going to"], return_tensors="pt")
>>>
>>> summary_ids = model.generate(**inputs)
>>> print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
I'm not going to be able to do that. I'm going to be able to do that
>>>
>>> penalized_ids = model.generate(**inputs, repetition_penalty=1.1)
>>> print(tokenizer.batch_decode(penalized_ids, skip_special_tokens=True)[0])
I'm not going to be able to do that. I'll just have to go out and play
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.SequenceBiasLogitsProcessor
< source >
( sequence_bias: typing.Dict[typing.Tuple[int], float] )
Parameters
sequence_bias (Dict[Tuple[int], float]) — Dictionary that maps a sequence of tokens to its bias term. Positive biases increase the odds of the sequence being selected, while negative biases do the opposite. If a sequence has a length of 1, its bias will always be applied. Otherwise, the bias will only be applied if the sequence in question is about to be completed (in the token selection step after this processor is applied).
LogitsProcessor that applies an additive bias on sequences. The bias is applied to the last token of a sequence when the next generated token can complete it. Consequently, to get the most out of biasing sequences with more than one token, consider using beam methods (to gracefully work around partially completed sequences that have a negative bias) and applying the bias to their prefixes (to ensure the bias is applied earlier).
In order to get the token ids of the sequences that you want to bias, make sure to set add_prefix_space=True when initializing the tokenizer, and use tokenizer(bad_words, add_special_tokens=False).input_ids. The add_prefix_space argument is only supported for some slow tokenizers, as fast tokenizers’ prefixing behaviours come from pre tokenizers. Read more here.
Examples:
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> inputs = tokenizer(["The full name of Donald is Donald"], return_tensors="pt")
>>> summary_ids = model.generate(inputs["input_ids"], max_new_tokens=4)
>>> print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
The full name of Donald is Donald J. Trump Jr
>>>
>>> tokenizer_with_prefix_space = AutoTokenizer.from_pretrained("gpt2", add_prefix_space=True)
>>> def get_tokens_as_tuple(word):
... return tuple(tokenizer_with_prefix_space([word], add_special_tokens=False).input_ids[0])
>>>
>>> sequence_bias = {get_tokens_as_tuple("Trump"): -10.0}
>>> biased_ids = model.generate(inputs["input_ids"], max_new_tokens=4, sequence_bias=sequence_bias)
>>> print(tokenizer.batch_decode(biased_ids, skip_special_tokens=True)[0])
The full name of Donald is Donald J. Donald,
>>> biased_ids = model.generate(inputs["input_ids"], max_new_tokens=4, num_beams=4, sequence_bias=sequence_bias)
>>> print(tokenizer.batch_decode(biased_ids, skip_special_tokens=True)[0])
The full name of Donald is Donald Rumsfeld,
>>>
>>> sequence_bias = {get_tokens_as_tuple("Donald Duck"): 10.0}
>>> biased_ids = model.generate(inputs["input_ids"], max_new_tokens=4, num_beams=4, sequence_bias=sequence_bias)
>>> print(tokenizer.batch_decode(biased_ids, skip_special_tokens=True)[0])
The full name of Donald is Donald Duck.
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.SuppressTokensAtBeginLogitsProcessor
< source >
( begin_suppress_tokens begin_index )
SuppressTokensAtBeginLogitsProcessor suppresses a list of tokens as soon as the generate function starts generating (i.e. at position begin_index). This ensures that the tokens defined by begin_suppress_tokens are not sampled at the beginning of the generation.
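An illustrative sketch (not from the upstream docstring), assuming begin_suppress_tokens is exposed through generate()/GenerationConfig as it is for Whisper-style models; the gpt2 checkpoint and the chosen token are assumptions.
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tokenizer("The first generated token will not be", return_tensors="pt")
>>> first_banned = tokenizer(" a", add_special_tokens=False).input_ids  # illustrative token id
>>> # the listed tokens are blocked only at the first generation step
>>> outputs = model.generate(
...     **inputs, max_new_tokens=10, begin_suppress_tokens=first_banned, pad_token_id=tokenizer.eos_token_id
... )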
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.SuppressTokensLogitsProcessor
< source >
( suppress_tokens )
This processor can be used to suppress a list of tokens. The processor will set their log probs to -inf so that they are not sampled.
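A minimal sketch (not part of the upstream docstring) using the suppress_tokens argument of generate(), which builds this processor internally; the gpt2 checkpoint and the suppressed token are illustrative.
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tokenizer("The most common word in English is", return_tensors="pt")
>>> banned = tokenizer(" the", add_special_tokens=False).input_ids  # ids that must never be sampled
>>> outputs = model.generate(
...     **inputs, max_new_tokens=10, suppress_tokens=banned, pad_token_id=tokenizer.eos_token_id
... )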
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.TemperatureLogitsWarper
< source >
( temperature: float )
Parameters
temperature (float) — Strictly positive float value used to modulate the logits distribution. A value smaller than 1 decreases randomness (and vice versa), with 0 being equivalent to shifting all probability mass to the most likely token.
LogitsWarper for temperature (exponential scaling output probability distribution), which effectively means that it can control the randomness of the predicted tokens.
Make sure that do_sample=True is included in the generate arguments otherwise the temperature value won’t have any effect.
Examples:
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed
>>> set_seed(0)
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> model.config.pad_token_id = model.config.eos_token_id
>>> inputs = tokenizer(["Hugging Face Company is"], return_tensors="pt")
>>>
>>> generate_kwargs = {"max_new_tokens": 10, "do_sample": True, "temperature": 1.0, "num_return_sequences": 2}
>>> outputs = model.generate(**inputs, **generate_kwargs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['Hugging Face Company is a joint venture between GEO Group, one of',
'Hugging Face Company is not an exact science – but what we believe does']
>>>
>>> generate_kwargs["temperature"] = 0.0001
>>> outputs = model.generate(**inputs, **generate_kwargs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['Hugging Face Company is a company that has been around for over 20 years',
'Hugging Face Company is a company that has been around for over 20 years']
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.TopKLogitsWarper
< source >
( top_k: int filter_value: float = -inf min_tokens_to_keep: int = 1 )
Parameters
top_k (int) — The number of highest probability vocabulary tokens to keep for top-k-filtering.
filter_value (float, optional, defaults to -float("Inf")) — All filtered values will be set to this float value.
min_tokens_to_keep (int, optional, defaults to 1) — Minimum number of tokens that cannot be filtered.
LogitsWarper that performs top-k, i.e. restricting to the k highest probability elements.
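A short sketch (not from the upstream docstring) enabling top-k filtering through generate(); the gpt2 checkpoint, seed, and prompt are illustrative, and do_sample=True is required for top_k to have an effect.
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed
>>> set_seed(0)
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tokenizer("A sequence: one, two,", return_tensors="pt")
>>> # only the 5 highest-probability tokens are kept before sampling
>>> outputs = model.generate(
...     **inputs, do_sample=True, top_k=5, max_new_tokens=10, pad_token_id=tokenizer.eos_token_id
... )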
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.TopPLogitsWarper
< source >
( top_p: float filter_value: float = -inf min_tokens_to_keep: int = 1 )
Parameters
top_p (float) — If set to < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
filter_value (float, optional, defaults to -float("Inf")) — All filtered values will be set to this float value.
min_tokens_to_keep (int, optional, defaults to 1) — Minimum number of tokens that cannot be filtered.
LogitsWarper that performs top-p filtering, i.e. restricting generation to the smallest set of most probable tokens whose cumulative probability adds up to top_p or higher.
Examples:
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed
>>> set_seed(0)
>>> model = AutoModelForCausalLM.from_pretrained("distilgpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
>>> inputs = tokenizer("A sequence: 1, 2", return_tensors="pt")
>>>
>>> outputs = model.generate(**inputs, do_sample=True)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: 1, 2, 0, 2, 2. 2, 2, 2, 2
>>>
>>>
>>> outputs = model.generate(**inputs, do_sample=True, top_p=0.1)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: 1, 2, 3, 4, 5, 6, 7, 8, 9
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.TypicalLogitsWarper
< source >
( mass: float = 0.9 filter_value: float = -inf min_tokens_to_keep: int = 1 )
Parameters
mass (float) — Value of typical_p between 0 and 1 inclusive, defaults to 0.9.
filter_value (float, optional, defaults to -float("Inf")) — All filtered values will be set to this float value.
min_tokens_to_keep (int, optional, defaults to 1) — Minimum number of tokens that cannot be filtered.
LogitsWarper that performs typical decoding. See Typical Decoding for Natural Language Generation for more information.
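A minimal sketch (not part of the upstream docstring) enabling typical decoding via the typical_p argument of generate(); the gpt2 checkpoint, seed, and the value 0.5 are illustrative, and do_sample=True is required.
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed
>>> set_seed(0)
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tokenizer("When the aliens finally landed,", return_tensors="pt")
>>> # tokens are filtered by how close their information content is to the expected value
>>> outputs = model.generate(
...     **inputs, do_sample=True, typical_p=0.5, max_new_tokens=20, pad_token_id=tokenizer.eos_token_id
... )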
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
class transformers.UnbatchedClassifierFreeGuidanceLogitsProcessor
< source >
( guidance_scale: float model unconditional_ids: typing.Optional[torch.LongTensor] = None unconditional_attention_mask: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = True )
Parameters
guidance_scale (float) — The guidance scale for classifier free guidance (CFG). CFG is enabled by setting guidance_scale != 1. Higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer quality. A value smaller than 1 has the opposite effect, while making the negative prompt provided with negative_prompt_ids (if any) act as a positive prompt.
unconditional_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary for the unconditional branch. If unset, will default to the last token of the prompt.
unconditional_attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Attention mask for unconditional_ids.
model (PreTrainedModel) — The model computing the unconditional scores. This is typically the same model as the one computing the conditional scores, and both models must use the same tokenizer.
smooth_factor (float, optional) — The interpolation weight for CFG Rescale. 1 means no rescaling, 0 reduces to the conditional scores without CFG. Turn it lower if the output degenerates.
use_cache (bool, optional) — Whether to cache key/values during the negative prompt forward pass.
Logits processor for Classifier-Free Guidance (CFG). The processors computes a weighted average across scores from prompt conditional and prompt unconditional (or negative) logits, parameterized by the guidance_scale. The unconditional scores are computed internally by prompting model with the unconditional_ids branch.
See the paper for more information.
Examples:
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> inputs = tokenizer(["Today, a dragon flew over Paris, France,"], return_tensors="pt")
>>> out = model.generate(inputs["input_ids"], guidance_scale=1.5)
>>> tokenizer.batch_decode(out, skip_special_tokens=True)[0]
'Today, a dragon flew over Paris, France, killing at least 50 people and injuring more than 100'
>>>
>>> neg_inputs = tokenizer(["A very happy event happened,"], return_tensors="pt")
>>> out = model.generate(inputs["input_ids"], guidance_scale=2, negative_prompt_ids=neg_inputs["input_ids"])
>>> tokenizer.batch_decode(out, skip_special_tokens=True)[0]
'Today, a dragon flew over Paris, France, killing at least 130 people. French media reported that'
>>>
>>> neg_inputs = tokenizer(["A very happy event happened,"], return_tensors="pt")
>>> out = model.generate(inputs["input_ids"], guidance_scale=0, negative_prompt_ids=neg_inputs["input_ids"])
>>> tokenizer.batch_decode(out, skip_special_tokens=True)[0]
"Today, a dragon flew over Paris, France, and I'm very happy to be here. I"
class transformers.WhisperTimeStampLogitsProcessor
< source >
( generate_config )
Parameters
generate_config (GenerateConfig) — The generate config used to generate the output. The following parameters are required: eos_token_id (int, optional, defaults to 50257): The id of the end-of-sequence token. no_timestamps_token_id (int, optional, defaults to 50363): The id of the "<|notimestamps|>" token. max_initial_timestamp_index (int, optional, defaults to 1): Used to set the maximum value of the initial timestamp. This is used to prevent the model from predicting timestamps that are too far in the future.
Whisper-specific LogitsProcessor. It modifies the prediction scores so that timestamp tokens are generated according to the timestamp rules Whisper uses during transcription.
See the paper for more information.
__call__
< source >
( input_ids: LongTensor scores: FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
Returns
torch.FloatTensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
TensorFlow
class transformers.TFForcedBOSTokenLogitsProcessor
< source >
( bos_token_id: int )
Parameters
bos_token_id (int) — The id of the token to force as the first generated token.
TFLogitsProcessor that enforces the specified token as the first generated token.
__call__
< source >
( input_ids: Tensor scores: Tensor cur_len: int )
class transformers.TFForcedEOSTokenLogitsProcessor
< source >
( max_length: int eos_token_id: int )
Parameters
max_length (int) — The maximum length of the sequence to be generated.
eos_token_id (int) — The id of the token to force as the last generated token when max_length is reached.
TFLogitsProcessor that enforces the specified token as the last generated token when max_length is reached.
__call__
< source >
( input_ids: Tensor scores: Tensor cur_len: int )
class transformers.TFForceTokensLogitsProcessor
< source >
( force_token_map: typing.List[typing.List[int]] )
This processor takes a list of pairs of integers which indicates a mapping from generation indices to token indices that will be forced before sampling. The processor will set their log probs to 0 and all other tokens to -inf so that they are sampled at their corresponding index.
__call__
< source >
( input_ids: Tensor scores: Tensor cur_len: int )
class transformers.TFLogitsProcessor
< source >
( )
Abstract base class for all logit processors that can be applied during generation.
__call__
< source >
( input_ids: Tensor scores: Tensor cur_len: int ) → tf.Tensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
scores (tf.Tensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.
cur_len (int) — The current length of valid input sequence tokens. In the TF implementation, the input_ids’ sequence length is the maximum length generate can produce, and we need to know which of its tokens are valid.
kwargs (Dict[str, Any], optional) — Additional logits processor specific kwargs.
Returns
tf.Tensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
TF method for processing logits.
class transformers.TFLogitsProcessorList
< source >
( iterable = () )
This class can be used to create a list of TFLogitsProcessor to subsequently process a scores input tensor. This class inherits from list and adds a specific call method to apply each TFLogitsProcessor to the inputs.
__call__
< source >
( input_ids: Tensor scores: Tensor cur_len: int **kwargs ) → tf.Tensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
scores (tf.Tensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.
cur_len (int) — The current length of valid input sequence tokens. In the TF implementation, the input_ids’ sequence length is the maximum length generate can produce, and we need to know which of its tokens are valid.
kwargs (Dict[str, Any], optional) — Additional logits processor specific kwargs.
Returns
tf.Tensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
Abstract base class for all logit warpers that can be applied during generation with multinomial sampling.
__call__
< source >
( input_ids: Tensor scores: Tensor cur_len: int ) → tf.Tensor of shape (batch_size, config.vocab_size)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
scores (tf.Tensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.
cur_len (int) — The current length of valid input sequence tokens. In the TF implementation, the input_ids’ sequence length is the maximum length generate can produce, and we need to know which of its tokens are valid.
kwargs (Dict[str, Any], optional) — Additional logits processor specific kwargs.
Returns
tf.Tensor of shape (batch_size, config.vocab_size)
The processed prediction scores.
TF method for warping logits.
class transformers.TFMinLengthLogitsProcessor
< source >
( min_length: int eos_token_id: int )
Parameters
min_length (int) — The minimum length below which the score of eos_token_id is set to -float("Inf").
eos_token_id (int) — The id of the end-of-sequence token.
TFLogitsProcessor enforcing a min-length by setting EOS probability to 0.
__call__
< source >
( input_ids: Tensor scores: Tensor cur_len: int )
class transformers.TFNoBadWordsLogitsProcessor
< source >
( bad_words_ids: typing.List[typing.List[int]] eos_token_id: int )
Parameters
bad_words_ids (List[List[int]]) — List of list of token ids that are not allowed to be generated. In order to get the tokens of the words that should not appear in the generated text, make sure to set add_prefix_space=True when initializing the tokenizer, and use tokenizer(bad_words, add_special_tokens=False).input_ids. The add_prefix_space argument is only supported for some slow tokenizers, as fast tokenizers’ prefixing behaviours come from pre tokenizers. Read more here.
eos_token_id (int) — The id of the end-of-sequence token.
TFLogitsProcessor that enforces that specified sequences will never be sampled.
__call__
< source >
( input_ids: Tensor scores: Tensor cur_len: int )
class transformers.TFNoRepeatNGramLogitsProcessor
< source >
( ngram_size: int )
Parameters
ngram_size (int) — All ngrams of size ngram_size can only occur once.
TFLogitsProcessor that enforces no repetition of n-grams. See Fairseq.
__call__
< source >
( input_ids: Tensor scores: Tensor cur_len: int )
class transformers.TFRepetitionPenaltyLogitsProcessor
< source >
( penalty: float )
Parameters
repetition_penalty (float) — The parameter for repetition penalty. 1.0 means no penalty. See this paper for more details.
TFLogitsProcessor enforcing an exponential penalty on repeated sequences.
__call__
< source >
( input_ids: Tensor scores: Tensor cur_len: int )
class transformers.TFSuppressTokensAtBeginLogitsProcessor
< source >
( begin_suppress_tokens begin_index )
TFSuppressTokensAtBeginLogitsProcessor suppresses a list of tokens as soon as the generate function starts generating (i.e. at position begin_index). This ensures that the tokens defined by begin_suppress_tokens are not sampled at the beginning of the generation.
__call__
< source >
( input_ids: Tensor scores: Tensor cur_len: int )
class transformers.TFSuppressTokensLogitsProcessor
< source >
( suppress_tokens )
This processor can be used to suppress a list of tokens. The processor will set their log probs to -inf so that they are not sampled.
__call__
< source >
( input_ids: Tensor scores: Tensor cur_len: int )
class transformers.TFTemperatureLogitsWarper
< source >
( temperature: float )
Parameters
temperature (float) — The value used to modulate the logits distribution.
TFLogitsWarper for temperature (exponential scaling output probability distribution).
__call__
< source >
( input_ids: Tensor scores: Tensor cur_len: int )
class transformers.TFTopKLogitsWarper
< source >
( top_k: int filter_value: float = -inf min_tokens_to_keep: int = 1 )
Parameters
top_k (int) — The number of highest probability vocabulary tokens to keep for top-k-filtering.
filter_value (float, optional, defaults to -float("Inf")) — All filtered values will be set to this float value.
min_tokens_to_keep (int, optional, defaults to 1) — Minimum number of tokens that cannot be filtered.
TFLogitsWarper that performs top-k, i.e. restricting to the k highest probability elements.
__call__
< source >
( input_ids: Tensor scores: Tensor cur_len: int )
class transformers.TFTopPLogitsWarper
< source >
( top_p: float filter_value: float = -inf min_tokens_to_keep: int = 1 )
Parameters
top_p (float) — If set to < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
filter_value (float, optional, defaults to -float("Inf")) — All filtered values will be set to this float value.
min_tokens_to_keep (int, optional, defaults to 1) — Minimum number of tokens that cannot be filtered.
TFLogitsWarper that performs top-p filtering, i.e. restricting generation to the smallest set of most probable tokens whose cumulative probability adds up to top_p or higher.
__call__
< source >
( input_ids: Tensor scores: Tensor cur_len: int )
FLAX
class transformers.FlaxForcedBOSTokenLogitsProcessor
< source >
( bos_token_id: int )
Parameters
bos_token_id (int) — The id of the token to force as the first generated token.
FlaxLogitsProcessor that enforces the specified token as the first generated token.
__call__
< source >
( input_ids: Array scores: Array cur_len: int )
class transformers.FlaxForcedEOSTokenLogitsProcessor
< source >
( max_length: int eos_token_id: int )
Parameters
max_length (int) — The maximum length of the sequence to be generated.
eos_token_id (int) — The id of the token to force as the last generated token when max_length is reached.
FlaxLogitsProcessor that enforces the specified token as the last generated token when max_length is reached.
__call__
< source >
( input_ids: Array scores: Array cur_len: int )
class transformers.FlaxForceTokensLogitsProcessor
< source >
( force_token_map )
Parameters
force_token_map (list) — Map giving token ids and indices where they will be forced to be sampled.
FlaxLogitsProcessor that takes a list of pairs of integers which indicates a mapping from generation indices to token indices that will be forced before sampling. The processor will set their log probs to 0 and all other tokens to -inf so that they are sampled at their corresponding index.
__call__
< source >
( input_ids: Array scores: Array cur_len: int )
class transformers.FlaxLogitsProcessor
< source >
( )
Abstract base class for all logit processors that can be applied during generation.
__call__
< source >
( input_ids: Array scores: Array ) → jnp.ndarray of shape (batch_size, config.vocab_size)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
scores (jnp.ndarray of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
kwargs (Dict[str, Any], optional) — Additional logits processor specific kwargs.
Returns
jnp.ndarray of shape (batch_size, config.vocab_size)
The processed prediction scores.
Flax method for processing logits.
class transformers.FlaxLogitsProcessorList
< source >
( iterable = () )
This class can be used to create a list of FlaxLogitsProcessor or FlaxLogitsWarper to subsequently process a scores input tensor. This class inherits from list and adds a specific call method to apply each FlaxLogitsProcessor or FlaxLogitsWarper to the inputs.
__call__
< source >
( input_ids: Array scores: Array cur_len: int **kwargs ) → jnp.ndarray of shape (batch_size, config.vocab_size)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
scores (jnp.ndarray of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
kwargs (Dict[str, Any], optional) — Additional logits processor specific kwargs.
Returns
jnp.ndarray of shape (batch_size, config.vocab_size)
The processed prediction scores.
Abstract base class for all logit warpers that can be applied during generation with multinomial sampling.
__call__
< source >
( input_ids: Array scores: Array ) → jnp.ndarray of shape (batch_size, config.vocab_size)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
scores (jnp.ndarray of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search
kwargs (Dict[str, Any], optional) — Additional logits processor specific kwargs.
Returns
jnp.ndarray of shape (batch_size, config.vocab_size)
The processed prediction scores.
Flax method for warping logits.
class transformers.FlaxMinLengthLogitsProcessor
< source >
( min_length: int eos_token_id: int )
Parameters
min_length (int) — The minimum length below which the score of eos_token_id is set to -float("Inf").
eos_token_id (int) — The id of the end-of-sequence token.
FlaxLogitsProcessor enforcing a min-length by setting EOS probability to 0.
__call__
< source >
( input_ids: Array scores: Array cur_len: int )
class transformers.FlaxSuppressTokensAtBeginLogitsProcessor
< source >
( begin_suppress_tokens begin_index )
Parameters
begin_suppress_tokens (List[int]) — Tokens to not sample.
begin_index (int) — Index where the tokens are suppressed.
FlaxLogitsProcessor suppressing a list of tokens as soon as the generate function starts generating (i.e. at position begin_index). This ensures that the tokens defined by begin_suppress_tokens are not sampled at the beginning of the generation.
__call__
< source >
( input_ids scores cur_len: int )
class transformers.FlaxSuppressTokensLogitsProcessor
< source >
( suppress_tokens: list )
Parameters
suppress_tokens (list) — Tokens to not sample.
FlaxLogitsProcessor suppressing a list of tokens at each decoding step. The processor will set their log probs to be -inf so they are not sampled.
__call__
< source >
( input_ids: Array scores: Array cur_len: int )
class transformers.FlaxTemperatureLogitsWarper
< source >
( temperature: float )
Parameters
temperature (float) — The value used to modulate the logits distribution.
FlaxLogitsWarper for temperature (exponential scaling output probability distribution).
__call__
< source >
( input_ids: Array scores: Array cur_len: int )
class transformers.FlaxTopKLogitsWarper
< source >
( top_k: int filter_value: float = -inf min_tokens_to_keep: int = 1 )
Parameters
top_k (int) — The number of highest probability vocabulary tokens to keep for top-k-filtering.
filter_value (float, optional, defaults to -float("Inf")) — All filtered values will be set to this float value.
min_tokens_to_keep (int, optional, defaults to 1) — Minimum number of tokens that cannot be filtered.
FlaxLogitsWarper that performs top-k, i.e. restricting to the k highest probability elements.
__call__
< source >
( input_ids: Array scores: Array cur_len: int )
class transformers.FlaxTopPLogitsWarper
< source >
( top_p: float filter_value: float = -inf min_tokens_to_keep: int = 1 )
Parameters
top_p (float) — If set to < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
filter_value (float, optional, defaults to -float("Inf")) — All filtered values will be set to this float value.
min_tokens_to_keep (int, optional, defaults to 1) — Minimum number of tokens that cannot be filtered.
FlaxLogitsWarper that performs top-p filtering, i.e. restricting generation to the smallest set of most probable tokens whose cumulative probability adds up to top_p or higher.
__call__
< source >
( input_ids: Array scores: Array cur_len: int )
class transformers.FlaxWhisperTimeStampLogitsProcessor
< source >
( generate_config model_config decoder_input_length )
Parameters
generate_config (GenerateConfig) — The generate config used to generate the output. The following parameters are required: eos_token_id (int, optional, defaults to 50257): The id of the end-of-sequence token. no_timestamps_token_id (int, optional, defaults to 50363): The id of the "<|notimestamps|>" token. max_initial_timestamp_index (int, optional, defaults to 1): Used to set the maximum value of the initial timestamp. This is used to prevent the model from predicting timestamps that are too far in the future.
Whisper-specific FlaxLogitsProcessor. It modifies the prediction scores so that timestamp tokens are generated according to the timestamp rules Whisper uses during transcription.
StoppingCriteria
A StoppingCriteria can be used to change when to stop generation (other than the EOS token). Please note that this is exclusively available to our PyTorch implementations.
Abstract base class for all stopping criteria that can be applied during generation.
__call__
< source >
( input_ids: LongTensor scores: FloatTensor **kwargs )
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
kwargs (Dict[str, Any], optional) — Additional stopping criteria specific kwargs.
class transformers.StoppingCriteriaList
< source >
( iterable = () )
__call__
< source >
( input_ids: LongTensor scores: FloatTensor **kwargs )
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
kwargs (Dict[str, Any], optional) — Additional stopping criteria specific kwargs.
class transformers.MaxLengthCriteria
< source >
( max_length: int max_position_embeddings: typing.Optional[int] = None )
Parameters
max_length (int) — The maximum length that the output sequence can have in number of tokens.
max_position_embeddings (int, optional) — The maximum model length, as defined by the model’s config.max_position_embeddings attribute.
This class can be used to stop generation whenever the full generated number of tokens exceeds max_length. Keep in mind for decoder-only type of transformers, this will include the initial prompted tokens.
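A brief sketch (not from the upstream docstring): generate() builds this criterion internally from max_length, so the usual way to use it is simply to set that argument; the gpt2 checkpoint and prompt are illustrative.
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tokenizer("Counting tokens, prompt included:", return_tensors="pt")
>>> # the limit covers prompt + generated tokens for decoder-only models
>>> outputs = model.generate(**inputs, max_length=20, pad_token_id=tokenizer.eos_token_id)
>>> outputs.shape[-1] <= 20
True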
__call__
< source >
( input_ids: LongTensor scores: FloatTensor **kwargs )
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
kwargs (Dict[str, Any], optional) — Additional stopping criteria specific kwargs.
class transformers.MaxTimeCriteria
< source >
( max_time: float initial_timestamp: typing.Optional[float] = None )
Parameters
max_time (float) — The maximum allowed time in seconds for the generation.
initial_timestamp (float, optional, defaults to time.time()) — The time from which the allowed generation time starts being counted.
This class can be used to stop generation whenever the full generation exceeds some amount of time. By default, the time starts being counted when you initialize this class. You can override this by passing an initial_timestamp.
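An illustrative sketch (not part of the upstream docstring) passing this criterion to generate() through stopping_criteria; the gpt2 checkpoint, prompt, and 0.5-second budget are assumptions.
>>> from transformers import (
...     AutoTokenizer,
...     AutoModelForCausalLM,
...     MaxTimeCriteria,
...     StoppingCriteriaList,
... )
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tokenizer("Generation stops once the time budget is spent:", return_tensors="pt")
>>> # stop after roughly half a second, whatever length has been reached by then
>>> criteria = StoppingCriteriaList([MaxTimeCriteria(max_time=0.5)])
>>> outputs = model.generate(
...     **inputs, stopping_criteria=criteria, max_new_tokens=200, pad_token_id=tokenizer.eos_token_id
... )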
__call__
< source >
( input_ids: LongTensor scores: FloatTensor **kwargs )
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) — Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
kwargs (Dict[str, Any], optional) — Additional stopping criteria specific kwargs.
Constraints
A Constraint can be used to force the generation to include specific tokens or sequences in the output. Please note that this is exclusively available to our PyTorch implementations.
Abstract base class for all constraints that can be applied during generation. It must define how the constraint can be satisfied.
All classes that inherit Constraint must follow the requirement that
completed = False
while not completed:
_, completed = constraint.update(constraint.advance())
will always terminate (halt).
advance
< source >
( ) → token_ids(torch.tensor)
Returns
token_ids(torch.tensor)
Must be a tensor of a list of indexable tokens, not some integer.
When called, returns the token that would take this constraint one step closer to being fulfilled.
copy
< source >
( stateful = False ) → constraint(Constraint)
Returns
constraint(Constraint)
The same constraint as the one being called from.
Creates a new instance of this constraint.
does_advance
Reads in a token and returns whether it creates progress.
remaining
Returns the number of remaining steps of advance() in order to complete this constraint.
reset
Resets the state of this constraint to its initialization. We would call this in cases where the fulfillment of a constraint is interrupted by an unwanted token.
test
Tests whether this constraint has been properly defined.
update
< source >
( token_id: int ) → stepped(bool)
stepped (bool) — Whether this constraint has become one step closer to being fulfilled.
completed (bool) — Whether this constraint has been completely fulfilled by this token being generated.
reset (bool) — Whether this constraint has reset its progress by this token being generated.
Reads in a token and returns booleans that indicate the progress made by it. This function will update the state of this object, unlike does_advance(self, token_id: int).
This isn’t to test whether a certain token will advance the progress; it’s to update its state as if it has been generated. This becomes important if token_id != desired token (refer to else statement in PhrasalConstraint)
class transformers.PhrasalConstraint
< source >
( token_ids: typing.List[int] )
Parameters
token_ids (List[int]) — The ids of the tokens that must be generated by the output.
Constraint enforcing that an ordered sequence of tokens is included in the output.
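As an illustration, a minimal sketch (not from the original reference; the model and phrase are placeholders) of forcing a phrase to appear using constrained beam search:
from transformers import AutoModelForCausalLM, AutoTokenizer, PhrasalConstraint

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Force the exact token sequence for "New York" to appear somewhere in the output.
force_phrase_ids = tokenizer("New York", add_special_tokens=False).input_ids
constraints = [PhrasalConstraint(token_ids=force_phrase_ids)]

inputs = tokenizer("The best city to visit is", return_tensors="pt")
outputs = model.generate(**inputs, constraints=constraints, num_beams=5, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))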
class transformers.DisjunctiveConstraint
< source >
( nested_token_ids: typing.List[typing.List[int]] )
Parameters
nested_token_ids (List[List[int]]) — A list of words, where each word is a list of ids. This constraint is fulfilled by generating just one from the list of words.
A special Constraint that is fulfilled by fulfilling just one of several constraints.
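A similar sketch (placeholders again) where generating any one of several surface forms satisfies the constraint:
from transformers import AutoModelForCausalLM, AutoTokenizer, DisjunctiveConstraint

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Generating either "screamed" or "shouted" fulfills the constraint.
nested_ids = [tokenizer(word, add_special_tokens=False).input_ids for word in ["screamed", "shouted"]]
constraint = DisjunctiveConstraint(nested_token_ids=nested_ids)

inputs = tokenizer("The baby", return_tensors="pt")
outputs = model.generate(**inputs, constraints=[constraint], num_beams=5, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))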
class transformers.ConstraintListState
< source >
( constraints: typing.List[transformers.generation.beam_constraints.Constraint] )
Parameters
constraints (List[Constraint]) — A list of Constraint objects that must be fulfilled by the beam scorer.
A class for beam scorers to track its progress through a list of constraints.
The list of tokens to generate such that we can make progress. By “list” we don’t mean the list of tokens that will fully fulfill a constraint.
Given constraints c_i = {t_ij | j == # of tokens}, If we’re not in the middle of progressing through a specific constraint c_i, we return:
[t_k1 for k in indices of unfulfilled constraints]
If we are in the middle of a constraint, then we return: [t_ij], where i is the index of the in-progress constraint, and j is the next step for the constraint.
Though we don’t care which constraint is fulfilled first, if we are in the process of fulfilling a constraint, that’s the only one we’ll return.
reset
< source >
( token_ids: typing.Optional[typing.List[int]] )
token_ids: the tokens generated thus far to reset the state of the progress through constraints.
BeamSearch
Abstract base class for all beam scorers that are used for beam_search() and beam_sample().
process
< source >
( input_ids: LongTensor next_scores: FloatTensor next_tokens: LongTensor next_indices: LongTensor **kwargs ) → UserDict
Parameters
input_ids (torch.LongTensor of shape (batch_size * num_beams, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using any class inheriting from PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
next_scores (torch.FloatTensor of shape (batch_size, 2 * num_beams)) — Current scores of the top 2 * num_beams non-finished beam hypotheses.
next_tokens (torch.LongTensor of shape (batch_size, 2 * num_beams)) — input_ids of the tokens corresponding to the top 2 * num_beams non-finished beam hypotheses.
next_indices (torch.LongTensor of shape (batch_size, 2 * num_beams)) — Beam indices indicating to which beam hypothesis the next_tokens correspond.
pad_token_id (int, optional) — The id of the padding token.
eos_token_id (Union[int, List[int]], optional) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens.
beam_indices (torch.LongTensor, optional) — Beam indices indicating to which beam hypothesis each token corresponds.
group_index (int, optional) — The index of the group of beams. Used with group_beam_search().
A dictionary composed of the fields as defined above:
next_beam_scores (torch.FloatTensor of shape (batch_size * num_beams)) — Updated scores of all non-finished beams.
next_beam_tokens (torch.FloatTensor of shape (batch_size * num_beams)) — Next tokens to be added to the non-finished beam_hypotheses.
next_beam_indices (torch.FloatTensor of shape (batch_size * num_beams)) — Beam indices indicating to which beam the next tokens shall be added.
finalize
< source >
( input_ids: LongTensor next_scores: FloatTensor next_tokens: LongTensor next_indices: LongTensor max_length: int **kwargs ) → torch.LongTensor of shape (batch_size * num_return_sequences, sequence_length)
Parameters
input_ids (torch.LongTensor of shape (batch_size * num_beams, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using any class inheriting from PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
final_beam_scores (torch.FloatTensor of shape (batch_size * num_beams)) — The final scores of all non-finished beams.
final_beam_tokens (torch.FloatTensor of shape (batch_size * num_beams)) — The last tokens to be added to the non-finished beam_hypotheses.
final_beam_indices (torch.FloatTensor of shape (batch_size * num_beams)) — The beam indices indicating to which beam the final_beam_tokens shall be added.
pad_token_id (int, optional) — The id of the padding token.
eos_token_id (Union[int, List[int]], optional) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens.
Returns
torch.LongTensor of shape (batch_size * num_return_sequences, sequence_length)
The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
class transformers.BeamSearchScorer
< source >
( batch_size: int num_beams: int device: device length_penalty: typing.Optional[float] = 1.0 do_early_stopping: typing.Union[bool, str, NoneType] = False num_beam_hyps_to_keep: typing.Optional[int] = 1 num_beam_groups: typing.Optional[int] = 1 max_length: typing.Optional[int] = None )
Parameters
batch_size (int) — Batch Size of input_ids for which standard beam search decoding is run in parallel.
num_beams (int) — Number of beams for beam search.
device (torch.device) — Defines the device type (e.g., "cpu" or "cuda") on which this instance of BeamSearchScorer will be allocated.
length_penalty (float, optional, defaults to 1.0) — Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log likelihood of the sequence (i.e. negative), length_penalty > 0.0 promotes longer sequences, while length_penalty < 0.0 encourages shorter sequences.
do_early_stopping (bool or str, optional, defaults to False) — Controls the stopping condition for beam-based methods, like beam-search. It accepts the following values: True, where the generation stops as soon as there are num_beams complete candidates; False, where a heuristic is applied and the generation stops when it is very unlikely to find better candidates; "never", where the beam search procedure only stops when there cannot be better candidates (canonical beam search algorithm).
num_beam_hyps_to_keep (int, optional, defaults to 1) — The number of beam hypotheses that shall be returned upon calling ~transformer.BeamSearchScorer.finalize.
num_beam_groups (int) — Number of groups to divide num_beams into in order to ensure diversity among different groups of beams. See this paper for more details.
max_length (int, optional) — The maximum length of the sequence to be generated.
BeamScorer implementing standard beam search decoding.
Adapted in part from Facebook’s XLM beam search code.
Reference for the diverse beam search algorithm and implementation Ashwin Kalyan’s DBS implementation
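For orientation, a minimal sketch of constructing the scorer (the values are illustrative; in practice the scorer is consumed by a custom generation loop or by beam_search()):
import torch
from transformers import BeamSearchScorer

beam_scorer = BeamSearchScorer(
    batch_size=1,
    num_beams=4,
    device=torch.device("cpu"),
    length_penalty=1.0,
    do_early_stopping=False,
    num_beam_hyps_to_keep=1,
)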
process
< source >
( input_ids: LongTensor next_scores: FloatTensor next_tokens: LongTensor next_indices: LongTensor pad_token_id: typing.Optional[int] = None eos_token_id: typing.Union[int, typing.List[int], NoneType] = None beam_indices: typing.Optional[torch.LongTensor] = None group_index: typing.Optional[int] = 0 )
finalize
< source >
( input_ids: LongTensor final_beam_scores: FloatTensor final_beam_tokens: LongTensor final_beam_indices: LongTensor max_length: int pad_token_id: typing.Optional[int] = None eos_token_id: typing.Union[int, typing.List[int], NoneType] = None beam_indices: typing.Optional[torch.LongTensor] = None )
class transformers.ConstrainedBeamSearchScorer
< source >
( batch_size: int num_beams: int constraints: typing.List[transformers.generation.beam_constraints.Constraint] device: device length_penalty: typing.Optional[float] = 1.0 do_early_stopping: typing.Union[bool, str, NoneType] = False num_beam_hyps_to_keep: typing.Optional[int] = 1 num_beam_groups: typing.Optional[int] = 1 max_length: typing.Optional[int] = None )
Parameters
batch_size (int) — Batch Size of input_ids for which standard beam search decoding is run in parallel.
num_beams (int) — Number of beams for beam search.
constraints (List[Constraint]) — A list of positive constraints represented as Constraint objects that must be fulfilled in the generation output. For more information, the documentation of Constraint should be read.
device (torch.device) — Defines the device type (e.g., "cpu" or "cuda") on which this instance of BeamSearchScorer will be allocated.
length_penalty (float, optional, defaults to 1.0) — Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log likelihood of the sequence (i.e. negative), length_penalty > 0.0 promotes longer sequences, while length_penalty < 0.0 encourages shorter sequences.
do_early_stopping (bool or str, optional, defaults to False) — Controls the stopping condition for beam-based methods, like beam-search. It accepts the following values: True, where the generation stops as soon as there are num_beams complete candidates; False, where a heuristic is applied and the generation stops when it is very unlikely to find better candidates; "never", where the beam search procedure only stops when there cannot be better candidates (canonical beam search algorithm).
num_beam_hyps_to_keep (int, optional, defaults to 1) — The number of beam hypotheses that shall be returned upon calling ~transformer.BeamSearchScorer.finalize.
num_beam_groups (int) — Number of groups to divide num_beams into in order to ensure diversity among different groups of beams. See this paper for more details.
max_length (int, optional) — The maximum length of the sequence to be generated.
BeamScorer implementing constrained beam search decoding.
process
< source >
( input_ids: LongTensor next_scores: FloatTensor next_tokens: LongTensor next_indices: LongTensor scores_for_all_vocab: FloatTensor pad_token_id: typing.Optional[int] = None eos_token_id: typing.Union[int, typing.List[int], NoneType] = None beam_indices: typing.Optional[torch.LongTensor] = None ) → UserDict
Parameters
input_ids (torch.LongTensor of shape (batch_size * num_beams, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using any class inheriting from PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
next_scores (torch.FloatTensor of shape (batch_size, 2 * num_beams)) — Current scores of the top 2 * num_beams non-finished beam hypotheses.
next_tokens (torch.LongTensor of shape (batch_size, 2 * num_beams)) — input_ids of the tokens corresponding to the top 2 * num_beams non-finished beam hypotheses.
next_indices (torch.LongTensor of shape (batch_size, 2 * num_beams)) — Beam indices indicating to which beam hypothesis the next_tokens correspond.
scores_for_all_vocab (torch.FloatTensor of shape (batch_size * num_beams, sequence_length)) — The scores of all tokens in the vocabulary for each of the beam hypotheses.
pad_token_id (int, optional) — The id of the padding token.
eos_token_id (Union[int, List[int]], optional) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens.
beam_indices (torch.LongTensor, optional) — Beam indices indicating to which beam hypothesis each token corresponds.
A dictionary composed of the fields as defined above:
next_beam_scores (torch.FloatTensor of shape (batch_size * num_beams)) — Updated scores of all non-finished beams.
next_beam_tokens (torch.FloatTensor of shape (batch_size * num_beams)) — Next tokens to be added to the non-finished beam_hypotheses.
next_beam_indices (torch.FloatTensor of shape (batch_size * num_beams)) — Beam indices indicating to which beam the next tokens shall be added.
finalize
< source >
( input_ids: LongTensor final_beam_scores: FloatTensor final_beam_tokens: LongTensor final_beam_indices: LongTensor max_length: int pad_token_id: typing.Optional[int] = None eos_token_id: typing.Union[int, typing.List[int], NoneType] = None beam_indices: typing.Optional[torch.LongTensor] = None )
Utilities
transformers.top_k_top_p_filtering
< source >
( logits: FloatTensor top_k: int = 0 top_p: float = 1.0 filter_value: float = -inf min_tokens_to_keep: int = 1 )
Parameters
top_k (int, optional, defaults to 0) — If > 0, only keep the top k tokens with highest probability (top-k filtering)
top_p (float, optional, defaults to 1.0) — If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751)
min_tokens_to_keep (int, optional, defaults to 1) — Minimum number of tokens we keep per batch example in the output.
Filter a distribution of logits using top-k and/or nucleus (top-p) filtering
From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317
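A minimal sketch (placeholder model and prompt) of filtering next-token logits and then sampling from the filtered distribution:
import torch
from torch import nn
from transformers import AutoModelForCausalLM, AutoTokenizer, top_k_top_p_filtering

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Today is a nice day and", return_tensors="pt")
next_token_logits = model(**inputs).logits[:, -1, :]  # logits for the next-token distribution
filtered_logits = top_k_top_p_filtering(next_token_logits, top_k=50, top_p=0.95)
probs = nn.functional.softmax(filtered_logits, dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
print(tokenizer.decode(next_token[0]))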
transformers.tf_top_k_top_p_filtering
< source >
( logits top_k = 0 top_p = 1.0 filter_value = -inf min_tokens_to_keep = 1 )
Parameters
top_k (int, optional, defaults to 0) — If > 0, only keep the top k tokens with highest probability (top-k filtering)
top_p (float, optional, defaults to 1.0) — If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751)
min_tokens_to_keep (int, optional, defaults to 1) — Minimum number of tokens we keep per batch example in the output.
Filter a distribution of logits using top-k and/or nucleus (top-p) filtering
From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317
Streamers
class transformers.TextStreamer
< source >
( tokenizer: AutoTokenizer skip_prompt: bool = False **decode_kwargs )
Parameters
tokenizer (AutoTokenizer) — The tokenizer used to decode the tokens.
skip_prompt (bool, optional, defaults to False) — Whether to skip the prompt to .generate() or not. Useful e.g. for chatbots.
decode_kwargs (dict, optional) — Additional keyword arguments to pass to the tokenizer’s decode method.
Simple text streamer that prints the token(s) to stdout as soon as entire words are formed.
The API for the streamer classes is still under development and may change in the future.
Examples:
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
>>> tok = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt")
>>> streamer = TextStreamer(tok)
>>> # Despite returning the usual output, the streamer will also print the generated text to stdout.
>>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=20)
An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven,
end
Flushes any remaining cache and prints a newline to stdout.
on_finalized_text
< source >
( text: str stream_end: bool = False )
Prints the new text to stdout. If the stream is ending, also prints a newline.
put
Receives tokens, decodes them, and prints them to stdout as soon as they form entire words.
class transformers.TextIteratorStreamer
< source >
( tokenizer: AutoTokenizer skip_prompt: bool = False timeout: typing.Optional[float] = None **decode_kwargs )
Parameters
tokenizer (AutoTokenizer) — The tokenizer used to decode the tokens.
skip_prompt (bool, optional, defaults to False) — Whether to skip the prompt to .generate() or not. Useful e.g. for chatbots.
timeout (float, optional) — The timeout for the text queue. If None, the queue will block indefinitely. Useful to handle exceptions in .generate(), when it is called in a separate thread.
decode_kwargs (dict, optional) — Additional keyword arguments to pass to the tokenizer’s decode method.
Streamer that stores print-ready text in a queue, to be used by a downstream application as an iterator. This is useful for applications that benefit from accessing the generated text in a non-blocking way (e.g. in an interactive Gradio demo).
The API for the streamer classes is still under development and may change in the future.
Examples:
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
>>> from threading import Thread
>>> tok = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt")
>>> streamer = TextIteratorStreamer(tok)
>>> # Run the generation in a separate thread, so that we can fetch the generated text in a non-blocking way.
>>> generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=20)
>>> thread = Thread(target=model.generate, kwargs=generation_kwargs)
>>> thread.start()
>>> generated_text = ""
>>> for new_text in streamer:
... generated_text += new_text
>>> generated_text
'An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven,'
on_finalized_text
< source >
( text: str stream_end: bool = False )
Put the new text in the queue. If the stream is ending, also put a stop signal in the queue.
Training on TPUs
Note: Most of the strategies introduced in the single GPU section (such as mixed precision training or gradient accumulation) and multi-GPU section are generic and apply to training models in general so make sure to have a look at it before diving into this section.
This document will be completed soon with information on how to train on TPUs.
Efficient Training on Multiple CPUs
When training on a single CPU is too slow, we can use multiple CPUs. This guide focuses on PyTorch-based DDP, which enables distributed CPU training efficiently.
Intel® oneCCL Bindings for PyTorch
Intel® oneCCL (collective communications library) is a library for efficient distributed deep learning training, implementing collectives such as allreduce, allgather, and alltoall. For more information on oneCCL, please refer to the oneCCL documentation and oneCCL specification.
The oneccl_bindings_for_pytorch module (called torch_ccl before version 1.12) implements the PyTorch C10D ProcessGroup API and can be dynamically loaded as an external ProcessGroup. It currently only works on the Linux platform.
Check oneccl_bind_pt for more detailed information.
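For orientation, a minimal sketch (not from the original guide) of what the bindings enable: importing the module registers a "ccl" backend that torch.distributed can then use. It assumes the usual distributed environment variables are set by the launcher (e.g., mpirun, as shown later in this guide):
import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401  registers the "ccl" backend with torch.distributed

dist.init_process_group(backend="ccl")  # MASTER_ADDR, RANK, WORLD_SIZE, etc. come from the launcher
print(f"initialized rank {dist.get_rank()} of {dist.get_world_size()}")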
Intel® oneCCL Bindings for PyTorch installation:
Wheel files are available for the following Python versions:
| Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 |
| 1.13.0 | | √ | √ | √ | √ |
| 1.12.100 | | √ | √ | √ | √ |
| 1.12.0 | | √ | √ | √ | √ |
| 1.11.0 | √ | √ | √ | √ | |
| 1.10.0 | √ | √ | √ | √ | |
pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu
where {pytorch_version} should be your PyTorch version, for instance 1.13.0. Check more approaches for oneccl_bind_pt installation. Versions of oneCCL and PyTorch must match.
The oneccl_bindings_for_pytorch 1.12.0 prebuilt wheel does not work with PyTorch 1.12.1 (it is for PyTorch 1.12.0); PyTorch 1.12.1 should work with oneccl_bindings_for_pytorch 1.12.100.
Intel® MPI library
Use this standards-based MPI implementation to deliver flexible, efficient, scalable cluster messaging on Intel® architecture. This component is part of the Intel® oneAPI HPC Toolkit.
oneccl_bindings_for_pytorch is installed along with the MPI tool set. You need to source the environment before using it.
For Intel® oneCCL >= 1.12.0:
oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
source $oneccl_bindings_for_pytorch_path/env/setvars.sh
For Intel® oneCCL versions < 1.12.0:
torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))")
source $torch_ccl_path/env/setvars.sh
IPEX installation:
IPEX provides performance optimizations for CPU training with both Float32 and BFloat16; refer to the single CPU section for more details.
The following “Usage in Trainer” section takes mpirun from the Intel® MPI library as an example.
Usage in Trainer
To enable multi-CPU distributed training in the Trainer with the ccl backend, users should add --ddp_backend ccl to the command arguments.
Let’s see an example with the question-answering script.
The following command enables training with 2 processes on one Xeon node, with one process running per socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance.
export CCL_WORKER_COUNT=1
export MASTER_ADDR=127.0.0.1
mpirun -n 2 -genv OMP_NUM_THREADS=23 \
python3 run_qa.py \
--model_name_or_path bert-large-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/ \
--no_cuda \
--ddp_backend ccl \
--use_ipex
The following command enables training with a total of four processes on two Xeons (node0 and node1, taking node0 as the main process), ppn (processes per node) is set to 2, with one process running per socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance.
In node0, you need to create a configuration file which contains the IP addresses of each node (for example hostfile) and pass that configuration file path as an argument.
cat hostfile
xxx.xxx.xxx.xxx #node0 ip
xxx.xxx.xxx.xxx #node1 ip
Now, run the following command on node0 and DDP will be enabled with 4 processes across node0 and node1, with BF16 auto mixed precision:
export CCL_WORKER_COUNT=1
export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip
mpirun -f hostfile -n 4 -ppn 2 \
-genv OMP_NUM_THREADS=23 \
python3 run_qa.py \
--model_name_or_path bert-large-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/ \
--no_cuda \
--ddp_backend ccl \
--use_ipex \
--bf16 |
Efficient Inference on a Single GPU
In addition to this guide, relevant information can be found as well in the guide for training on a single GPU and the guide for inference on CPUs.
Flash Attention 2
Note that this feature is experimental and might considerably change in future versions. For instance, the Flash Attention 2 API might migrate to BetterTransformer API in the near future.
Flash Attention 2 can considerably speed up transformer-based models’ training and inference speed. Flash Attention 2 has been introduced in the official Flash Attention repository by Tri Dao et al. The scientific paper on Flash Attention can be found here.
Make sure to follow the installation guide on the repository mentioned above to properly install Flash Attention 2. Once that package is installed, you can benefit from this feature.
We natively support Flash Attention 2 for the following models:
Llama
Mistral
Falcon
You can request to add Flash Attention 2 support for more models by opening an issue on GitHub, and even open a Pull Request to integrate the changes. The supported models can be used for inference and training, including training with padding tokens - which is currently not supported for BetterTransformer API below.
Flash Attention 2 can only be used when the models’ dtype is fp16 or bf16 and runs only on NVIDIA-GPU devices. Make sure to cast your model to the appropriate dtype and load them on a supported device before using that feature.
Quick usage
To enable Flash Attention 2 in your model, add use_flash_attention_2 in the from_pretrained arguments:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM
model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
use_flash_attention_2=True,
)
And use it for generation or fine-tuning.
Expected speedups
You can benefit from considerable speedups for fine-tuning and inference, especially for long sequences. However, since Flash Attention does not support computing attention scores with padding tokens under the hood, we must manually pad / unpad the attention scores for batched inference when the sequence contains padding tokens. This leads to a significant slowdown for batched generations with padding tokens.
To overcome this, one should use Flash Attention without padding tokens in the sequence during training (e.g., by packing a dataset, i.e., concatenating sequences until reaching the maximum sequence length). An example is provided here.
Below is the expected speedup you can get for a simple forward pass on tiiuae/falcon-7b with a sequence length of 4096 and various batch sizes, without padding tokens:
Below is the expected speedup you can get for a simple forward pass on meta-llama/Llama-7b-hf with a sequence length of 4096 and various batch sizes, without padding tokens:
For sequences with padding tokens (training with padding tokens or generating with padding tokens), we need to unpad / pad the input sequences to compute correctly the attention scores. For relatively small sequence length, on pure forward pass, this creates an overhead leading to a small speedup (below 30% of the input has been filled with padding tokens).
But for large sequence lengths you can benefit from interesting speedups for pure inference (and also training).
Note that Flash Attention makes the attention computation more memory efficient, meaning you can train with much larger sequence lengths without facing CUDA OOM issues. It can lead to memory reductions of up to 20x for large sequence lengths. Check out the official flash attention repository for more details.
Advanced usage
You can combine this feature with many existing features for model optimization. Check out a few examples below:
Combining Flash Attention 2 and 8-bit models
You can combine this feature together with 8-bit quantization:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM
model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
load_in_8bit=True,
use_flash_attention_2=True,
)
Combining Flash Attention 2 and 4-bit models
You can combine this feature together with 4-bit quantization:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM
model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
load_in_4bit=True,
use_flash_attention_2=True,
)
Combining Flash Attention 2 and PEFT
You can combine this feature together with PEFT for training adapters using Flash Attention 2 under the hood:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM
from peft import LoraConfig
model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
load_in_4bit=True,
use_flash_attention_2=True,
)
lora_config = LoraConfig(
r=8,
task_type="CAUSAL_LM"
)
model.add_adapter(lora_config)
...
BetterTransformer
BetterTransformer converts 🤗 Transformers models to use the PyTorch-native fastpath execution, which calls optimized kernels like Flash Attention under the hood.
BetterTransformer is also supported for faster inference on single and multi-GPU for text, image, and audio models.
Flash Attention can only be used for models using fp16 or bf16 dtype. Make sure to cast your model to the appropriate dtype before using BetterTransformer.
Encoder models
PyTorch-native nn.MultiHeadAttention attention fastpath, called BetterTransformer, can be used with Transformers through the integration in the 🤗 Optimum library.
PyTorch’s attention fastpath allows speeding up inference through kernel fusions and the use of nested tensors. Detailed benchmarks can be found in this blog post.
After installing the optimum package, to use Better Transformer during inference, the relevant internal modules are replaced by calling to_bettertransformer():
model = model.to_bettertransformer()
The method reverse_bettertransformer() allows going back to the original modeling and should be used before saving the model in order to use the canonical transformers modeling:
model = model.reverse_bettertransformer()
model.save_pretrained("saved_model")
Have a look at this blog post to learn more about what is possible to do with BetterTransformer API for encoder models.
Decoder models
For text models, especially decoder-based models (GPT, T5, Llama, etc.), the BetterTransformer API converts all attention operations to use the torch.nn.functional.scaled_dot_product_attention operator (SDPA) that is only available in PyTorch 2.0 and onwards.
To convert a model to BetterTransformer:
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model.to_bettertransformer()
SDPA can also call Flash Attention kernels under the hood. To enable Flash Attention or to check that it is available in a given setting (hardware, problem size), use torch.backends.cuda.sdp_kernel as a context manager:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16).to("cuda")
# convert the model to BetterTransformer
model.to_bettertransformer()
input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
If you see a bug with a traceback saying
RuntimeError: No available kernel. Aborting execution.
try using the PyTorch nightly version, which may have a broader coverage for Flash Attention:
pip3 install -U --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118
Or make sure your model is correctly cast to float16 or bfloat16.
Have a look at this detailed blogpost to read more about what is possible to do with BetterTransformer + SDPA API.
bitsandbytes integration for FP4 mixed-precision inference
You can install bitsandbytes and benefit from easy model compression on GPUs. Using FP4 quantization you can expect to reduce the model size by up to 8x compared to its native full-precision version. Check out below how to get started.
Note that this feature can also be used in a multi GPU setup.
Requirements
Latest bitsandbytes library pip install bitsandbytes>=0.39.0
Install latest accelerate from source pip install git+https://github.com/huggingface/accelerate.git
Install latest transformers from source pip install git+https://github.com/huggingface/transformers.git
Running FP4 models - single GPU setup - Quickstart
You can quickly run a FP4 model on a single GPU by running the following code:
from transformers import AutoModelForCausalLM
model_name = "bigscience/bloom-2b5"
model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True)
Note that device_map is optional, but setting device_map = 'auto' is preferred for inference as it will efficiently dispatch the model on the available resources.
Running FP4 models - multi GPU setup
The way to load your mixed 4-bit model in multiple GPUs is as follows (same command as single GPU setup):
model_name = "bigscience/bloom-2b5"
model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True)
But you can control the GPU RAM you want to allocate on each GPU using accelerate. Use the max_memory argument as follows:
max_memory_mapping = {0: "600MB", 1: "1GB"}
model_name = "bigscience/bloom-3b"
model_4bit = AutoModelForCausalLM.from_pretrained(
model_name, device_map="auto", load_in_4bit=True, max_memory=max_memory_mapping
)
In this example, the first GPU will use 600MB of memory and the second 1GB.
Advanced usage
For more advanced usage of this method, please have a look at the quantization documentation page.
bitsandbytes integration for Int8 mixed-precision matrix decomposition
Note that this feature can also be used in a multi GPU setup.
From the paper LLM.int8() : 8-bit Matrix Multiplication for Transformers at Scale, we support Hugging Face integration for all models in the Hub with a few lines of code. The method reduces the nn.Linear size by a factor of 2 for float16 and bfloat16 weights and by a factor of 4 for float32 weights, with close to no impact on quality, by operating on the outliers in half-precision.
Int8 mixed-precision matrix decomposition works by separating a matrix multiplication into two streams: (1) a systematic feature outlier stream matrix multiplied in fp16 (0.01%), (2) a regular stream of int8 matrix multiplication (99.9%). With this method, int8 inference with no predictive degradation is possible for very large models. For more details regarding the method, check out the paper or our blogpost about the integration.
Note that you need a GPU to run mixed-8bit models, as the kernels have been compiled for GPUs only. Make sure that you have enough GPU memory to store a quarter (or a half, if your model weights are in half precision) of the model before using this feature. Below are some notes to help you use this module, or follow the demos on Google colab.
Requirements
If you have bitsandbytes<0.37.0, make sure you run on NVIDIA GPUs that support 8-bit tensor cores (Turing, Ampere or newer architectures - e.g. T4, RTX20s RTX30s, A40-A100). For bitsandbytes>=0.37.0, all GPUs should be supported.
Install the correct version of bitsandbytes by running: pip install bitsandbytes>=0.31.5
Install accelerate pip install accelerate>=0.12.0
Running mixed-Int8 models - single GPU setup
After installing the required libraries, the way to load your mixed 8-bit model is as follows:
from transformers import AutoModelForCausalLM
model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
For text generation, we recommend:
using the model’s generate() method instead of the pipeline() function. Although inference is possible with the pipeline() function, it is not optimized for mixed-8bit models, and will be slower than using the generate() method. Moreover, some sampling strategies, like nucleus sampling, are not supported by the pipeline() function for mixed-8bit models.
placing all inputs on the same device as the model.
Here is a simple example:
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "bigscience/bloom-2b5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
prompt = "Hello, my llama is cute"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
generated_ids = model_8bit.generate(**inputs)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
Running mixed-int8 models - multi GPU setup
The way to load your mixed 8-bit model in multiple GPUs is as follows (same command as single GPU setup):
model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
But you can control the GPU RAM you want to allocate on each GPU using accelerate. Use the max_memory argument as follows:
max_memory_mapping = {0: "1GB", 1: "2GB"}
model_name = "bigscience/bloom-3b"
model_8bit = AutoModelForCausalLM.from_pretrained(
model_name, device_map="auto", load_in_8bit=True, max_memory=max_memory_mapping
)
In this example, the first GPU will use 1GB of memory and the second 2GB.
Colab demos
With this method you can run inference on models that were previously impossible to run on Google Colab. Check out the demo for running T5-11b (42GB in fp32) with 8-bit quantization on Google Colab:
Or this demo for BLOOM-3B:
Advanced usage: mixing FP4 (or Int8) and BetterTransformer
You can combine the different methods described above to get the best performance for your model. For example, you can use BetterTransformer with FP4 mixed-precision inference + flash attention:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=quantization_config)
input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)) |
Training on TPU with TensorFlow
If you don’t need long explanations and just want TPU code samples to get started with, check out our TPU example notebook!
What is a TPU?
A TPU is a Tensor Processing Unit. They are hardware designed by Google, which are used to greatly speed up the tensor computations within neural networks, much like GPUs. They can be used for both network training and inference. They are generally accessed through Google’s cloud services, but small TPUs can also be accessed directly for free through Google Colab and Kaggle Kernels.
Because all TensorFlow models in 🤗 Transformers are Keras models, most of the methods in this document are generally applicable to TPU training for any Keras model! However, there are a few points that are specific to the HuggingFace ecosystem (hug-o-system?) of Transformers and Datasets, and we’ll make sure to flag them up when we get to them.
What kinds of TPU are available?
New users are often very confused by the range of TPUs, and the different ways to access them. The first key distinction to understand is the difference between TPU Nodes and TPU VMs.
When you use a TPU Node, you are effectively indirectly accessing a remote TPU. You will need a separate VM, which will initialize your network and data pipeline and then forward them to the remote node. When you use a TPU on Google Colab, you are accessing it in the TPU Node style.
Using TPU Nodes can have some quite unexpected behaviour for people who aren’t used to them! In particular, because the TPU is located on a physically different system to the machine you’re running your Python code on, your data cannot be local to your machine - any data pipeline that loads from your machine’s internal storage will totally fail! Instead, data must be stored in Google Cloud Storage where your data pipeline can still access it, even when the pipeline is running on the remote TPU node.
If you can fit all your data in memory as np.ndarray or tf.Tensor, then you can fit() on that data even when using Colab or a TPU Node, without needing to upload it to Google Cloud Storage.
🤗Specific Hugging Face Tip🤗: The methods Dataset.to_tf_dataset() and its higher-level wrapper model.prepare_tf_dataset() , which you will see throughout our TF code examples, will both fail on a TPU Node. The reason for this is that even though they create a tf.data.Dataset it is not a “pure” tf.data pipeline and uses tf.numpy_function or Dataset.from_generator() to stream data from the underlying HuggingFace Dataset. This HuggingFace Dataset is backed by data that is on a local disc and which the remote TPU Node will not be able to read.
The second way to access a TPU is via a TPU VM. When using a TPU VM, you connect directly to the machine that the TPU is attached to, much like training on a GPU VM. TPU VMs are generally easier to work with, particularly when it comes to your data pipeline. All of the above warnings do not apply to TPU VMs!
This is an opinionated document, so here’s our opinion: Avoid using TPU Node if possible. It is more confusing and more difficult to debug than TPU VMs. It is also likely to be unsupported in future - Google’s latest TPU, TPUv4, can only be accessed as a TPU VM, which suggests that TPU Nodes are increasingly going to become a “legacy” access method. However, we understand that the only free TPU access is on Colab and Kaggle Kernels, which uses TPU Node - so we’ll try to explain how to handle it if you have to! Check the TPU example notebook for code samples that explain this in more detail.
What sizes of TPU are available?
A single TPU (a v2-8/v3-8/v4-8) runs 8 replicas. TPUs exist in pods that can run hundreds or thousands of replicas simultaneously. When you use more than a single TPU but less than a whole pod (for example, a v3-32), your TPU fleet is referred to as a pod slice.
When you access a free TPU via Colab, you generally get a single v2-8 TPU.
I keep hearing about this XLA thing. What’s XLA, and how does it relate to TPUs?
XLA is an optimizing compiler, used by both TensorFlow and JAX. In JAX it is the only compiler, whereas in TensorFlow it is optional (but mandatory on TPU!). The easiest way to enable it when training a Keras model is to pass the argument jit_compile=True to model.compile(). If you don’t get any errors and performance is good, that’s a great sign that you’re ready to move to TPU!
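For example, a minimal sketch (model and optimizer are placeholders for your own setup) of turning on XLA for a Keras model on CPU/GPU:
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    jit_compile=True,  # enable XLA on CPU/GPU; remove this before running on TPU (see the tip below)
)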
Debugging on TPU is generally a bit harder than on CPU/GPU, so we recommend getting your code running on CPU/GPU with XLA first before trying it on TPU. You don’t have to train for long, of course - just for a few steps to make sure that your model and data pipeline are working like you expect them to.
XLA compiled code is usually faster - so even if you’re not planning to run on TPU, adding jit_compile=True can improve your performance. Be sure to note the caveats below about XLA compatibility, though!
Tip born of painful experience: Although using jit_compile=True is a good way to get a speed boost and test if your CPU/GPU code is XLA-compatible, it can actually cause a lot of problems if you leave it in when actually training on TPU. XLA compilation will happen implicitly on TPU, so remember to remove that line before actually running your code on a TPU!
How do I make my model XLA compatible?
In many cases, your code is probably XLA-compatible already! However, there are a few things that work in normal TensorFlow that don’t work in XLA. We’ve distilled them into three core rules below:
🤗Specific HuggingFace Tip🤗: We’ve put a lot of effort into rewriting our TensorFlow models and loss functions to be XLA-compatible. Our models and loss functions generally obey rule #1 and #2 by default, so you can skip over them if you’re using transformers models. Don’t forget about these rules when writing your own models and loss functions, though!
XLA Rule #1: Your code cannot have “data-dependent conditionals”
What that means is that any if statement cannot depend on values inside a tf.Tensor. For example, this code block cannot be compiled with XLA!
if tf.reduce_sum(tensor) > 10:
tensor = tensor / 2.0
This might seem very restrictive at first, but most neural net code doesn’t need to do this. You can often get around this restriction by using tf.cond (see the documentation here) or by removing the conditional and finding a clever math trick with indicator variables instead, like so:
sum_over_10 = tf.cast(tf.reduce_sum(tensor) > 10, tf.float32)
tensor = tensor / (1.0 + sum_over_10)
This code has exactly the same effect as the code above, but by avoiding a conditional, we ensure it will compile with XLA without problems!
XLA Rule #2: Your code cannot have “data-dependent shapes”
What this means is that the shape of all of the tf.Tensor objects in your code cannot depend on their values. For example, the function tf.unique cannot be compiled with XLA, because it returns a tensor containing one instance of each unique value in the input. The shape of this output will obviously be different depending on how repetitive the input Tensor was, and so XLA refuses to handle it!
In general, most neural network code obeys rule #2 by default. However, there are a few common cases where it becomes a problem. One very common one is when you use label masking, setting your labels to a negative value to indicate that those positions should be ignored when computing the loss. If you look at NumPy or PyTorch loss functions that support label masking, you will often see code like this that uses boolean indexing:
label_mask = labels >= 0
masked_outputs = outputs[label_mask]
masked_labels = labels[label_mask]
loss = compute_loss(masked_outputs, masked_labels)
mean_loss = torch.mean(loss)
This code is totally fine in NumPy or PyTorch, but it breaks in XLA! Why? Because the shape of masked_outputs and masked_labels depends on how many positions are masked - that makes it a data-dependent shape. However, just like for rule #1, we can often rewrite this code to yield exactly the same output without any data-dependent shapes.
label_mask = tf.cast(labels >= 0, tf.float32)
loss = compute_loss(outputs, labels)
loss = loss * label_mask
mean_loss = tf.reduce_sum(loss) / tf.reduce_sum(label_mask)
Here, we avoid data-dependent shapes by computing the loss for every position, but zeroing out the masked positions in both the numerator and denominator when we calculate the mean, which yields exactly the same result as the first block while maintaining XLA compatibility. Note that we use the same trick as in rule #1 - converting a tf.bool to tf.float32 and using it as an indicator variable. This is a really useful trick, so remember it if you need to convert your own code to XLA!
XLA Rule #3: XLA will need to recompile your model for every different input shape it sees
This is the big one. What this means is that if your input shapes are very variable, XLA will have to recompile your model over and over, which will create huge performance problems. This commonly arises in NLP models, where input texts have variable lengths after tokenization. In other modalities, static shapes are more common and this rule is much less of a problem.
How can you get around rule #3? The key is padding - if you pad all your inputs to the same length, and then use an attention_mask, you can get the same results as you’d get from variable shapes, but without any XLA issues. However, excessive padding can cause severe slowdown too - if you pad all your samples to the maximum length in the whole dataset, you might end up with batches consisting of endless padding tokens, which will waste a lot of compute and memory!
There isn’t a perfect solution to this problem. However, you can try some tricks. One very useful trick is to pad batches of samples up to a multiple of a number like 32 or 64 tokens. This often only increases the number of tokens by a small amount, but it hugely reduces the number of unique input shapes, because every input shape now has to be a multiple of 32 or 64. Fewer unique input shapes means fewer XLA compilations!
🤗Specific HuggingFace Tip🤗: Our tokenizers and data collators have methods that can help you here. You can use padding="max_length" or padding="longest" when calling tokenizers to get them to output padded data. Our tokenizers and data collators also have a pad_to_multiple_of argument that you can use to reduce the number of unique input shapes you see!
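For instance, a minimal sketch (tokenizer name and sentences are placeholders) of padding to a multiple of 64 tokens, both directly in the tokenizer and in a data collator:
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Pad the batch to the longest sample, rounded up to a multiple of 64 tokens.
batch = tokenizer(
    ["A short example.", "A slightly longer example sentence."],
    padding="longest",
    pad_to_multiple_of=64,
    return_tensors="np",
)

# The same argument is available on data collators.
collator = DataCollatorWithPadding(tokenizer, pad_to_multiple_of=64, return_tensors="np")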
How do I actually train my model on TPU?
Once your training is XLA-compatible and (if you’re using TPU Node / Colab) your dataset has been prepared appropriately, running on TPU is surprisingly easy! All you really need to change in your code is to add a few lines to initialize your TPU, and to ensure that your model and dataset are created inside a TPUStrategy scope. Take a look at our TPU example notebook to see this in action!
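For reference, a minimal sketch of that boilerplate (the model name is a placeholder; the full, runnable version lives in the TPU example notebook):
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# Initialize the TPU and build a distribution strategy around it.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Model (and dataset) creation must happen inside the strategy scope.
with strategy.scope():
    model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
    model.compile(optimizer="adam")

# model.fit(tf_dataset)  # the dataset must also be TPU-compatible (see above)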
Summary
There was a lot in here, so let’s summarize with a quick checklist you can follow when you want to get your model ready for TPU training:
Make sure your code follows the three rules of XLA
Compile your model with jit_compile=True on CPU/GPU and confirm that you can train it with XLA
Either load your dataset into memory or use a TPU-compatible dataset loading approach (see notebook)
Migrate your code either to Colab (with accelerator set to “TPU”) or a TPU VM on Google Cloud
Add TPU initializer code (see notebook)
Create your TPUStrategy and make sure dataset loading and model creation are inside the strategy.scope() (see notebook)
Don’t forget to take jit_compile=True out again when you move to TPU!
🙏🙏🙏🥺🥺🥺
Call model.fit()
You did it! |
Hyperparameter Search using Trainer API
🤗 Transformers provides a Trainer class optimized for training 🤗 Transformers models, making it easier to start training without manually writing your own training loop. The Trainer provides an API for hyperparameter search. This doc shows how to enable it with an example.
Hyperparameter Search backend
Trainer supports four hyperparameter search backends currently: optuna, sigopt, raytune and wandb.
You should install them before using them as the hyperparameter search backend:
pip install optuna/sigopt/wandb/ray[tune]
How to enable Hyperparameter search in example
Define the hyperparameter search space; different backends need different formats.
For sigopt, see sigopt object_parameter; it looks like the following:
>>> def sigopt_hp_space(trial):
... return [
... {"bounds": {"min": 1e-6, "max": 1e-4}, "name": "learning_rate", "type": "double"},
... {
... "categorical_values": ["16", "32", "64", "128"],
... "name": "per_device_train_batch_size",
... "type": "categorical",
... },
... ]
For optuna, see optuna object_parameter; it looks like the following:
>>> def optuna_hp_space(trial):
... return {
... "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
... "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [16, 32, 64, 128]),
... }
Optuna provides multi-objective HPO. You can pass direction in hyperparameter_search and define your own compute_objective to return multiple objective values. The Pareto Front (List[BestRun]) will be returned by hyperparameter_search; you should refer to the test case TrainerHyperParameterMultiObjectOptunaIntegrationTest in test_trainer. It looks like the following:
>>> best_trials = trainer.hyperparameter_search(
... direction=["minimize", "maximize"],
... backend="optuna",
... hp_space=optuna_hp_space,
... n_trials=20,
... compute_objective=compute_objective,
... )
For raytune, see raytune object_parameter; it looks like the following:
>>> def ray_hp_space(trial):
... return {
... "learning_rate": tune.loguniform(1e-6, 1e-4),
... "per_device_train_batch_size": tune.choice([16, 32, 64, 128]),
... }
For wandb, see wandb object_parameter; it looks like the following:
>>> def wandb_hp_space(trial):
... return {
... "method": "random",
... "metric": {"name": "objective", "goal": "minimize"},
... "parameters": {
... "learning_rate": {"distribution": "uniform", "min": 1e-6, "max": 1e-4},
... "per_device_train_batch_size": {"values": [16, 32, 64, 128]},
... },
... }
Define a model_init function and pass it to the Trainer. For example:
>>> def model_init(trial):
... return AutoModelForSequenceClassification.from_pretrained(
... model_args.model_name_or_path,
... from_tf=bool(".ckpt" in model_args.model_name_or_path),
... config=config,
... cache_dir=model_args.cache_dir,
... revision=model_args.model_revision,
... use_auth_token=True if model_args.use_auth_token else None,
... )
Create a Trainer with your model_init function, training arguments, training and test datasets, and evaluation function:
>>> trainer = Trainer(
... model=None,
... args=training_args,
... train_dataset=small_train_dataset,
... eval_dataset=small_eval_dataset,
... compute_metrics=compute_metrics,
... tokenizer=tokenizer,
... model_init=model_init,
... data_collator=data_collator,
... )
Call hyperparameter search to get the best trial parameters. The backend can be "optuna"/"sigopt"/"wandb"/"ray". direction can be "minimize" or "maximize", which indicates whether to minimize or maximize the objective.
You can define your own compute_objective function; if it is not defined, the default compute_objective will be called, and the sum of evaluation metrics (such as f1) is returned as the objective value.
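For example, a minimal sketch of a custom compute_objective that optimizes a single metric (the "eval_f1" key is hypothetical and depends on what your compute_metrics function reports):
>>> def compute_objective(metrics):
...     return metrics["eval_f1"]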
>>> best_trial = trainer.hyperparameter_search(
... direction="maximize",
... backend="optuna",
... hp_space=optuna_hp_space,
... n_trials=20,
... compute_objective=compute_objective,
... )
Hyperparameter search for DDP fine-tuning
Currently, hyperparameter search for DDP is enabled for optuna and sigopt. Only the rank-zero process generates the search trial and passes the arguments to the other ranks.
Methods and tools for efficient training on a single GPU
This guide demonstrates practical techniques that you can use to increase the efficiency of your model’s training by optimizing memory utilization, speeding up the training, or both. If you’d like to understand how GPU is utilized during training, please refer to the Model training anatomy conceptual guide first. This guide focuses on practical techniques.
If you have access to a machine with multiple GPUs, these approaches are still valid, plus you can leverage additional methods outlined in the multi-GPU section.
When training large models, there are two aspects that should be considered at the same time:
Data throughput/training time
Model performance
Maximizing the throughput (samples/second) leads to lower training cost. This is generally achieved by utilizing the GPU as much as possible and thus filling GPU memory to its limit. If the desired batch size exceeds the limits of the GPU memory, the memory optimization techniques, such as gradient accumulation, can help.
However, if the preferred batch size fits into memory, there’s no reason to apply memory-optimizing techniques because they can slow down the training. Just because one can use a large batch size, does not necessarily mean they should. As part of hyperparameter tuning, you should determine which batch size yields the best results and then optimize resources accordingly.
The methods and tools covered in this guide can be classified based on the effect they have on the training process:
| Method/tool | Improves training speed | Optimizes memory utilization |
| Batch size choice | Yes | Yes |
| Gradient accumulation | No | Yes |
| Gradient checkpointing | No | Yes |
| Mixed precision training | Yes | (No) |
| Optimizer choice | Yes | Yes |
| Data preloading | Yes | No |
| DeepSpeed Zero | No | Yes |
| torch.compile | Yes | No |
Note: when using mixed precision with a small model and a large batch size, there will be some memory savings but with a large model and a small batch size, the memory use will be larger.
You can combine the above methods to get a cumulative effect. These techniques are available to you whether you are training your model with Trainer or writing a pure PyTorch loop, in which case you can configure these optimizations with 🤗 Accelerate.
If these methods do not result in sufficient gains, you can explore the following options:
Look into building your own custom Docker container with efficient software prebuilds
Consider a model that uses Mixture of Experts (MoE)
Convert your model to BetterTransformer to leverage PyTorch native attention
Finally, if all of the above is still not enough, even after switching to a server-grade GPU like A100, consider moving to a multi-GPU setup. All these approaches are still valid in a multi-GPU setup, plus you can leverage additional parallelism techniques outlined in the multi-GPU section.
Batch size choice
To achieve optimal performance, start by identifying the appropriate batch size. It is recommended to use batch sizes and input/output neuron counts that are of size 2^N. Often it’s a multiple of 8, but it can be higher depending on the hardware being used and the model’s dtype.
For reference, check out NVIDIA’s recommendation for input/output neuron counts and batch size for fully connected layers (which are involved in GEMMs (General Matrix Multiplications)).
Tensor Core Requirements define the multiplier based on the dtype and the hardware. For instance, for fp16 data type a multiple of 8 is recommended, unless it’s an A100 GPU, in which case use multiples of 64.
For parameters that are small, consider also Dimension Quantization Effects. This is where tiling happens and the right multiplier can have a significant speedup.
Gradient Accumulation
The gradient accumulation method aims to calculate gradients in smaller increments instead of computing them for the entire batch at once. This approach involves iteratively calculating gradients in smaller batches by performing forward and backward passes through the model and accumulating the gradients during the process. Once a sufficient number of gradients have been accumulated, the model’s optimization step is executed. By employing gradient accumulation, it becomes possible to increase the effective batch size beyond the limitations imposed by the GPU’s memory capacity. However, it is important to note that the additional forward and backward passes introduced by gradient accumulation can slow down the training process.
You can enable gradient accumulation by adding the gradient_accumulation_steps argument to TrainingArguments:
training_args = TrainingArguments(per_device_train_batch_size=1, gradient_accumulation_steps=4, **default_args)
In the above example, your effective batch size becomes 4.
Alternatively, use 🤗 Accelerate to gain full control over the training loop. Find the 🤗 Accelerate example further down in this guide.
While it is advised to max out GPU usage as much as possible, a high number of gradient accumulation steps can result in a more pronounced training slowdown. Consider the following example. Let’s say, the per_device_train_batch_size=4 without gradient accumulation hits the GPU’s limit. If you would like to train with batches of size 64, do not set the per_device_train_batch_size to 1 and gradient_accumulation_steps to 64. Instead, keep per_device_train_batch_size=4 and set gradient_accumulation_steps=16. This results in the same effective batch size while making better use of the available GPU resources.
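For the example above, the recommended configuration looks like this:
training_args = TrainingArguments(per_device_train_batch_size=4, gradient_accumulation_steps=16, **default_args)  # effective batch size of 64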
For additional information, please refer to batch size and gradient accumulation benchmarks for RTX-3090 and A100.
Gradient Checkpointing
Some large models may still face memory issues even when the batch size is set to 1 and gradient accumulation is used. This is because there are other components that also require memory storage.
Saving all activations from the forward pass in order to compute the gradients during the backward pass can result in significant memory overhead. The alternative approach of discarding the activations and recalculating them when needed during the backward pass, would introduce a considerable computational overhead and slow down the training process.
Gradient checkpointing offers a compromise between these two approaches and saves strategically selected activations throughout the computational graph so only a fraction of the activations need to be re-computed for the gradients. For an in-depth explanation of gradient checkpointing, refer to this great article.
To enable gradient checkpointing in the Trainer, pass the corresponding flag to TrainingArguments:
training_args = TrainingArguments(
per_device_train_batch_size=1, gradient_accumulation_steps=4, gradient_checkpointing=True, **default_args
)
Alternatively, use 🤗 Accelerate - find the 🤗 Accelerate example further in this guide.
While gradient checkpointing may improve memory efficiency, it slows training by approximately 20%.
Mixed precision training
Mixed precision training is a technique that aims to optimize the computational efficiency of training models by utilizing lower-precision numerical formats for certain variables. Traditionally, most models use 32-bit floating point precision (fp32 or float32) to represent and process variables. However, not all variables require this high precision level to achieve accurate results. By reducing the precision of certain variables to lower numerical formats like 16-bit floating point (fp16 or float16), we can speed up the computations. Because in this approach some computations are performed in half-precision, while some are still in full precision, the approach is called mixed precision training.
Most commonly mixed precision training is achieved by using fp16 (float16) data types, however, some GPU architectures (such as the Ampere architecture) offer bf16 and tf32 (CUDA internal data type) data types. Check out the NVIDIA Blog to learn more about the differences between these data types.
fp16
The main advantage of mixed precision training comes from saving the activations in half precision (fp16). Although the gradients are also computed in half precision they are converted back to full precision for the optimization step so no memory is saved here. While mixed precision training results in faster computations, it can also lead to more GPU memory being utilized, especially for small batch sizes. This is because the model is now present on the GPU in both 16-bit and 32-bit precision (1.5x the original model on the GPU).
To enable mixed precision training, set the fp16 flag to True:
training_args = TrainingArguments(per_device_train_batch_size=4, fp16=True, **default_args)
If you prefer to use 🤗 Accelerate, find the 🤗 Accelerate example further in this guide.
BF16
If you have access to Ampere or newer hardware you can use bf16 for mixed precision training and evaluation. While bf16 has a worse precision than fp16, it has a much bigger dynamic range. In fp16 the biggest number you can represent is 65504, and any number above that will result in an overflow. A bf16 number can be as large as 3.39e+38 (!) which is about the same as fp32 - because both use 8 bits for the exponent.
You can enable BF16 in the 🤗 Trainer with:
training_args = TrainingArguments(bf16=True, **default_args)
TF32
The Ampere hardware uses a magical data type called tf32. It has the same numerical range as fp32 (8 bits of exponent), but instead of 23 bits of mantissa it has only 10 bits (the same as fp16) and uses only 19 bits in total. It’s “magical” in the sense that you can use the normal fp32 training and/or inference code and by enabling tf32 support you can get up to 3x throughput improvement. All you need to do is to add the following to your code:
import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
CUDA will automatically switch to using tf32 instead of fp32 where possible, assuming that the used GPU is from the Ampere series.
According to NVIDIA research, the majority of machine learning training workloads show the same perplexity and convergence with tf32 training as with fp32. If you’re already using fp16 or bf16 mixed precision it may help with the throughput as well.
You can enable this mode in the 🤗 Trainer:
TrainingArguments(tf32=True, **default_args)
tf32 can’t be accessed directly via tensor.to(dtype=torch.tf32) because it is an internal CUDA data type. You need torch>=1.7 to use tf32 data types.
For additional information on tf32 vs other precisions, please refer to the following benchmarks: RTX-3090 and A100.
Flash Attention 2
You can speedup the training throughput by using Flash Attention 2 integration in transformers. Check out the appropriate section in the single GPU section to learn more about how to load a model with Flash Attention 2 modules.
Optimizer choice
The most common optimizer used to train transformer models is Adam or AdamW (Adam with weight decay). Adam achieves good convergence by storing the rolling average of the previous gradients; however, it adds an additional memory footprint of the order of the number of model parameters. To remedy this, you can use an alternative optimizer. For example if you have NVIDIA/apex installed, adamw_apex_fused will give you the fastest training experience among all supported AdamW optimizers.
Trainer integrates a variety of optimizers that can be used out of box: adamw_hf, adamw_torch, adamw_torch_fused, adamw_apex_fused, adamw_anyprecision, adafactor, or adamw_bnb_8bit. More optimizers can be plugged in via a third-party implementation.
Let’s take a closer look at two alternatives to AdamW optimizer:
adafactor which is available in Trainer
adamw_bnb_8bit is also available in Trainer, but a third-party integration is provided below for demonstration.
For comparison, for a 3B-parameter model, like “t5-3b”:
A standard AdamW optimizer will need 24GB of GPU memory because it uses 8 bytes for each parameter (8*3 => 24GB)
Adafactor optimizer will need more than 12GB. It uses slightly more than 4 bytes for each parameter, so 4*3 and then some extra.
8bit BNB quantized optimizer will use only (2*3) 6GB if all optimizer states are quantized.
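As a rough sketch, the arithmetic behind these estimates can be written out explicitly (the bytes-per-parameter figures are the approximations quoted above and ignore the memory needed for the weights, gradients, and activations themselves):
def optimizer_state_gb(num_params, bytes_per_param):
    # Optimizer-state memory is roughly: number of parameters * bytes of state per parameter.
    return num_params * bytes_per_param / 1e9

num_params = 3e9  # a 3B-parameter model such as "t5-3b"
print(optimizer_state_gb(num_params, 8))  # standard AdamW: ~24 GB
print(optimizer_state_gb(num_params, 4))  # Adafactor: ~12 GB plus some extra
print(optimizer_state_gb(num_params, 2))  # 8-bit Adam: ~6 GB if all optimizer states are quantized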
Adafactor
Adafactor doesn’t store rolling averages for each element in weight matrices. Instead, it keeps aggregated information (sums of rolling averages row- and column-wise), significantly reducing its footprint. However, compared to Adam, Adafactor may have slower convergence in certain cases.
You can switch to Adafactor by setting optim="adafactor" in TrainingArguments:
training_args = TrainingArguments(per_device_train_batch_size=4, optim="adafactor", **default_args)
Combined with other approaches (gradient accumulation, gradient checkpointing, and mixed precision training) you can notice up to a 3x memory improvement while maintaining the throughput! However, as mentioned before, the convergence of Adafactor can be worse than that of Adam.
8-bit Adam
Instead of aggregating optimizer states like Adafactor, 8-bit Adam keeps the full state and quantizes it. Quantization means that it stores the state with lower precision and dequantizes it only for the optimization. This is similar to the idea behind mixed precision training.
To use adamw_bnb_8bit, you simply need to set optim="adamw_bnb_8bit" in TrainingArguments:
training_args = TrainingArguments(per_device_train_batch_size=4, optim="adamw_bnb_8bit", **default_args)
However, we can also use a third-party implementation of the 8-bit optimizer for demonstration purposes to see how that can be integrated.
First, follow the installation guide in the GitHub repo to install the bitsandbytes library that implements the 8-bit Adam optimizer.
Next you need to initialize the optimizer. This involves two steps:
First, group the model’s parameters into two groups - one where weight decay should be applied, and the other one where it should not. Usually, biases and layer norm parameters are not weight decayed.
Then do some argument housekeeping to use the same parameters as the previously used AdamW optimizer.
import bitsandbytes as bnb
from torch import nn
from transformers.trainer_pt_utils import get_parameter_names
training_args = TrainingArguments(per_device_train_batch_size=4, **default_args)
decay_parameters = get_parameter_names(model, [nn.LayerNorm])
decay_parameters = [name for name in decay_parameters if "bias" not in name]
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters() if n in decay_parameters],
"weight_decay": training_args.weight_decay,
},
{
"params": [p for n, p in model.named_parameters() if n not in decay_parameters],
"weight_decay": 0.0,
},
]
# Gather the AdamW hyperparameters from TrainingArguments so the 8-bit optimizer uses the same settings
optimizer_kwargs = {
    "betas": (training_args.adam_beta1, training_args.adam_beta2),
    "eps": training_args.adam_epsilon,
    "lr": training_args.learning_rate,
}
adam_bnb_optim = bnb.optim.Adam8bit(optimizer_grouped_parameters, **optimizer_kwargs)
Finally, pass the custom optimizer as an argument to the Trainer:
trainer = Trainer(model=model, args=training_args, train_dataset=ds, optimizers=(adam_bnb_optim, None))
Combined with other approaches (gradient accumulation, gradient checkpointing, and mixed precision training), you can expect to get about a 3x memory improvement and even slightly higher throughput than when using Adafactor.
multi_tensor
pytorch-nightly introduced torch.optim._multi_tensor which should significantly speed up the optimizers for situations with lots of small feature tensors. It should eventually become the default, but if you want to experiment with it sooner, take a look at this GitHub issue.
Data preloading
One of the important requirements to reach great training speed is the ability to feed the GPU at the maximum speed it can handle. By default, everything happens in the main process, and it might not be able to read the data from disk fast enough, and thus create a bottleneck, leading to GPU under-utilization. Configure the following arguments to reduce the bottleneck:
DataLoader(pin_memory=True, ...) - ensures the data gets preloaded into the pinned memory on CPU and typically leads to much faster transfers from CPU to GPU memory.
DataLoader(num_workers=4, ...) - spawn several workers to preload data faster. During training, watch the GPU utilization stats; if it’s far from 100%, experiment with increasing the number of workers. Of course, the problem could be elsewhere, so many workers won’t necessarily lead to better performance.
When using Trainer, the corresponding TrainingArguments are: dataloader_pin_memory (True by default), and dataloader_num_workers (defaults to 0).
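For example, with Trainer this looks like:
training_args = TrainingArguments(dataloader_pin_memory=True, dataloader_num_workers=4, **default_args)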
DeepSpeed ZeRO
DeepSpeed is an open-source deep learning optimization library that is integrated with 🤗 Transformers and 🤗 Accelerate. It provides a wide range of features and optimizations designed to improve the efficiency and scalability of large-scale deep learning training.
If your model fits onto a single GPU and you have enough space to fit a small batch size, you don’t need to use DeepSpeed as it’ll only slow things down. However, if the model doesn’t fit onto a single GPU or you can’t fit a small batch, you can leverage DeepSpeed ZeRO + CPU Offload, or NVMe Offload for much larger models. In this case, you need to separately install the library, then follow one of the guides to create a configuration file and launch DeepSpeed:
For an in-depth guide on DeepSpeed integration with Trainer, review the corresponding documentation, specifically the section for a single GPU. Some adjustments are required to use DeepSpeed in a notebook; please take a look at the corresponding guide.
If you prefer to use 🤗 Accelerate, refer to 🤗 Accelerate DeepSpeed guide.
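Whichever route you take, with Trainer the integration ultimately boils down to pointing the deepspeed argument of TrainingArguments at your configuration file (a sketch; ds_config_zero3.json is an assumed file name created by following the guides above):
training_args = TrainingArguments(deepspeed="ds_config_zero3.json", **default_args)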
Using torch.compile
PyTorch 2.0 introduced a new compile function that doesn’t require any modification to existing PyTorch code but can optimize your code by adding a single line of code: model = torch.compile(model).
If using Trainer, you only need to pass the torch_compile option in the TrainingArguments:
training_args = TrainingArguments(torch_compile=True, **default_args)
torch.compile uses Python’s frame evaluation API to automatically create a graph from existing PyTorch programs. After capturing the graph, different backends can be deployed to lower the graph to an optimized engine. You can find more details and benchmarks in PyTorch documentation.
torch.compile has a growing list of backends, which can be found by calling torchdynamo.list_backends(), each of which has its own optional dependencies.
Choose which backend to use by specifying it via torch_compile_backend in the TrainingArguments (see the sketch after the list below). Some of the most commonly used backends are:
Debugging backends:
dynamo.optimize("eager") - Uses PyTorch to run the extracted GraphModule. This is quite useful in debugging TorchDynamo issues.
dynamo.optimize("aot_eager") - Uses AotAutograd with no compiler, i.e, just using PyTorch eager for the AotAutograd’s extracted forward and backward graphs. This is useful for debugging, and unlikely to give speedups.
Training & inference backends:
dynamo.optimize("inductor") - Uses TorchInductor backend with AotAutograd and cudagraphs by leveraging codegened Triton kernels Read more
dynamo.optimize("nvfuser") - nvFuser with TorchScript. Read more
dynamo.optimize("aot_nvfuser") - nvFuser with AotAutograd. Read more
dynamo.optimize("aot_cudagraphs") - cudagraphs with AotAutograd. Read more
Inference-only backends:
dynamo.optimize("ofi") - Uses Torchscript optimize_for_inference. Read more
dynamo.optimize("fx2trt") - Uses Nvidia TensorRT for inference optimizations. Read more
dynamo.optimize("onnxrt") - Uses ONNXRT for inference on CPU/GPU. Read more
dynamo.optimize("ipex") - Uses IPEX for inference on CPU. Read more
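For example, to select the TorchInductor backend explicitly:
training_args = TrainingArguments(torch_compile=True, torch_compile_backend="inductor", **default_args)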
For an example of using torch.compile with 🤗 Transformers, check out this blog post on fine-tuning a BERT model for Text Classification using the newest PyTorch 2.0 features
Using 🤗 Accelerate
With 🤗 Accelerate you can use the above methods while gaining full control over the training loop and can essentially write the loop in pure PyTorch with some minor modifications.
Suppose you have combined the methods in the TrainingArguments like so:
training_args = TrainingArguments(
per_device_train_batch_size=1,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
fp16=True,
**default_args,
)
The full example training loop with 🤗 Accelerate is only a handful of lines of code long:
from accelerate import Accelerator
from torch.utils.data.dataloader import DataLoader
dataloader = DataLoader(ds, batch_size=training_args.per_device_train_batch_size)
if training_args.gradient_checkpointing:
model.gradient_checkpointing_enable()
accelerator = Accelerator(fp16=training_args.fp16)
model, optimizer, dataloader = accelerator.prepare(model, adam_bnb_optim, dataloader)
model.train()
for step, batch in enumerate(dataloader, start=1):
loss = model(**batch).loss
loss = loss / training_args.gradient_accumulation_steps
accelerator.backward(loss)
if step % training_args.gradient_accumulation_steps == 0:
optimizer.step()
optimizer.zero_grad()
First we wrap the dataset in a DataLoader. Then we can enable gradient checkpointing by calling the model’s gradient_checkpointing_enable() method. When we initialize the Accelerator we can specify if we want to use mixed precision training and it will take care of it for us in the prepare call. During the prepare call the dataloader will also be distributed across workers should we use multiple GPUs. We use the same 8-bit optimizer from the earlier example.
Finally, we can add the main training loop. Note that the backward call is handled by 🤗 Accelerate. We can also see how gradient accumulation works: we normalize the loss, so we get the average at the end of accumulation and once we have enough steps we run the optimization.
Implementing these optimization techniques with 🤗 Accelerate only takes a handful of lines of code and comes with the benefit of more flexibility in the training loop. For a full documentation of all features have a look at the Accelerate documentation.
Efficient Software Prebuilds
PyTorch’s pip and conda builds come prebuilt with the cuda toolkit which is enough to run PyTorch, but it is insufficient if you need to build cuda extensions.
At times, additional efforts may be required to pre-build some components. For instance, if you’re using libraries like apex that don’t come pre-compiled. In other situations figuring out how to install the right cuda toolkit system-wide can be complicated. To address these scenarios PyTorch and NVIDIA released a new version of NGC docker container which already comes with everything prebuilt. You just need to install your programs on it, and it will run out of the box.
This approach is also useful if you want to tweak the pytorch source and/or make a new customized build. To find the docker image version you want start with PyTorch release notes, choose one of the latest monthly releases. Go into the release’s notes for the desired release, check that the environment’s components are matching your needs (including NVIDIA Driver requirements!) and then at the very top of that document go to the corresponding NGC page. If for some reason you get lost, here is the index of all PyTorch NGC images.
Next follow the instructions to download and deploy the docker image.
Mixture of Experts
Some recent papers reported a 4-5x training speedup and a faster inference by integrating Mixture of Experts (MoE) into the Transformer models.
Since it has been discovered that more parameters lead to better performance, this technique makes it possible to increase the number of parameters by an order of magnitude without increasing training costs.
In this approach every other FFN layer is replaced with a MoE Layer which consists of many experts, with a gated function that trains each expert in a balanced way depending on the input token’s position in a sequence.
(source: GLAM)
You can find exhaustive details and comparison tables in the papers listed at the end of this section.
The main drawback of this approach is that it requires staggering amounts of GPU memory - almost an order of magnitude larger than its dense equivalent. Various distillation and other approaches have been proposed to overcome the much higher memory requirements.
There is a direct trade-off though: you can use just a few experts with a 2-3x smaller base model instead of dozens or hundreds of experts, which leads to a 5x smaller model and thus increases the training speed moderately while increasing the memory requirements only moderately as well.
Most related papers and implementations are built around Tensorflow/TPUs:
GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding
Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity
GLaM: Generalist Language Model (GLaM)
And for Pytorch DeepSpeed has built one as well: DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale, Mixture of Experts - blog posts: 1, 2 and specific deployment with large transformer-based natural language generation models: blog post, Megatron-Deepspeed branch.
Using PyTorch native attention and Flash Attention
PyTorch 2.0 released a native torch.nn.functional.scaled_dot_product_attention (SDPA), that allows using fused GPU kernels such as memory-efficient attention and flash attention.
After installing the optimum package, the relevant internal modules can be replaced to use PyTorch’s native attention with:
model = model.to_bettertransformer()
Once converted, train the model as usual.
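For instance, a minimal sketch (assuming optimum is installed, and using gpt2 purely as an illustrative checkpoint):
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("gpt2")
model = model.to_bettertransformer()  # swaps the attention modules for the SDPA-based implementation
# ... then train as usual, e.g. by passing `model` to Trainer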
The PyTorch-native scaled_dot_product_attention operator can only dispatch to Flash Attention if no attention_mask is provided.
By default, in training mode, the BetterTransformer integration drops mask support and can only be used for training that does not require a padding mask for batching. This is the case, for example, during masked language modeling or causal language modeling. BetterTransformer is not suited for fine-tuning models on tasks that require a padding mask.
Check out this blogpost to learn more about acceleration and memory-savings with SDPA. |
https://huggingface.co/docs/transformers/performance | Performance and Scalability
Training large transformer models and deploying them to production present various challenges.
During training, the model may require more GPU memory than available or exhibit slow training speed. In the deployment phase, the model can struggle to handle the required throughput in a production environment.
This documentation aims to assist you in overcoming these challenges and finding the optimal setting for your use-case. The guides are divided into training and inference sections, as each comes with different challenges and solutions. Within each section you’ll find separate guides for different hardware configurations, such as single GPU vs. multi-GPU for training or CPU vs. GPU for inference.
Use this document as your starting point to navigate further to the methods that match your scenario.
Training
Training large transformer models efficiently requires an accelerator such as a GPU or TPU. The most common case is where you have a single GPU. The methods that you can apply to improve training efficiency on a single GPU extend to other setups such as multiple GPU. However, there are also techniques that are specific to multi-GPU or CPU training. We cover them in separate sections.
Methods and tools for efficient training on a single GPU: start here to learn common approaches that can help optimize GPU memory utilization, speed up the training, or both.
Multi-GPU training section: explore this section to learn about further optimization methods that apply to a multi-GPU settings, such as data, tensor, and pipeline parallelism.
CPU training section: learn about mixed precision training on CPU.
Efficient Training on Multiple CPUs: learn about distributed CPU training.
Training on TPU with TensorFlow: if you are new to TPUs, refer to this section for an opinionated introduction to training on TPUs and using XLA.
Custom hardware for training: find tips and tricks when building your own deep learning rig.
Hyperparameter Search using Trainer API
Inference
Efficient inference with large models in a production environment can be as challenging as training them. In the following sections we go through the steps to run inference on CPU and single/multi-GPU setups.
Inference on a single CPU
Inference on a single GPU
Multi-GPU inference
XLA Integration for TensorFlow Models
Training and inference
Here you’ll find techniques, tips and tricks that apply whether you are training a model, or running inference with it.
Instantiating a big model
Troubleshooting performance issues
Contribute
This document is far from complete and a lot more needs to be added, so if you have additions or corrections to make, please don’t hesitate to open a PR. If you aren’t sure, start an Issue and we can discuss the details there.
When making contributions claiming that A is better than B, please try to include a reproducible benchmark and/or a link to the source of that information (unless it comes directly from you). |
https://huggingface.co/docs/transformers/perf_hardware | Custom hardware for training
The hardware you use to run model training and inference can have a big effect on performance. For a deep dive into GPUs make sure to check out Tim Dettmer’s excellent blog post.
Let’s have a look at some practical advice for GPU setups.
GPU
When you train bigger models you have essentially three options:
bigger GPUs
more GPUs
more CPU memory and NVMe storage (which DeepSpeed ZeRO-Infinity can offload to)
Let’s start at the case where you have a single GPU.
Power and Cooling
If you bought an expensive high end GPU make sure you give it the correct power and sufficient cooling.
Power:
Some high end consumer GPU cards have 2 and sometimes 3 PCI-E 8-Pin power sockets. Make sure you have as many independent 12V PCI-E 8-Pin cables plugged into the card as there are sockets. Do not use the 2 splits at one end of the same cable (also known as pigtail cable). That is if you have 2 sockets on the GPU, you want 2 PCI-E 8-Pin cables going from your PSU to the card and not one that has 2 PCI-E 8-Pin connectors at the end! You won’t get the full performance out of your card otherwise.
Each PCI-E 8-Pin power cable needs to be plugged into a 12V rail on the PSU side and can supply up to 150W of power.
Some other cards may use PCI-E 12-Pin connectors, and these can deliver up to 500-600W of power.
Low end cards may use 6-Pin connectors, which supply up to 75W of power.
Additionally you want the high-end PSU that has stable voltage. Some lower quality ones may not give the card the stable voltage it needs to function at its peak.
And of course the PSU needs to have enough unused Watts to power the card.
Cooling:
When a GPU gets overheated it will start throttling down and will not deliver full performance, and it can even shut down if it gets too hot.
It’s hard to tell the exact best temperature to strive for when a GPU is heavily loaded, but probably anything under +80C is good, but lower is better - perhaps 70-75C is an excellent range to be in. The throttling down is likely to start at around 84-90C. But other than throttling performance a prolonged very high temperature is likely to reduce the lifespan of a GPU.
Next let’s have a look at one of the most important aspects when having multiple GPUs: connectivity.
Multi-GPU Connectivity
If you use multiple GPUs the way cards are inter-connected can have a huge impact on the total training time. If the GPUs are on the same physical node, you can run:
nvidia-smi topo -m
and it will tell you how the GPUs are inter-connected. On a machine with dual-GPU which are connected with NVLink, you will most likely see something like:
GPU0 GPU1 CPU Affinity NUMA Affinity
GPU0 X NV2 0-23 N/A
GPU1 NV2 X 0-23 N/A
on a different machine w/o NVLink we may see:
GPU0 GPU1 CPU Affinity NUMA Affinity
GPU0 X PHB 0-11 N/A
GPU1 PHB X 0-11 N/A
The report includes this legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
So the first report, NV2, tells us the GPUs are interconnected with 2 NVLinks, while the second report, PHB, indicates a typical consumer-level PCIe+Bridge setup.
Check what type of connectivity you have on your setup. Some of these will make the communication between cards faster (e.g. NVLink), others slower (e.g. PHB).
Depending on the type of scalability solution used, the connectivity speed could have a major or a minor impact. If the GPUs need to sync rarely, as in DDP, the impact of a slower connection will be less significant. If the GPUs need to send messages to each other often, as in ZeRO-DP, then faster connectivity becomes super important to achieve faster training.
NVlink
NVLink is a wire-based serial multi-lane near-range communications link developed by Nvidia.
Each new generation provides a faster bandwidth, e.g. here is a quote from Nvidia Ampere GA102 GPU Architecture:
Third-Generation NVLink® GA102 GPUs utilize NVIDIA’s third-generation NVLink interface, which includes four x4 links, with each link providing 14.0625 GB/sec bandwidth in each direction between two GPUs. Four links provide 56.25 GB/sec bandwidth in each direction, and 112.5 GB/sec total bandwidth between two GPUs. Two RTX 3090 GPUs can be connected together for SLI using NVLink. (Note that 3-Way and 4-Way SLI configurations are not supported.)
So the higher X you get in the report of NVX in the output of nvidia-smi topo -m the better. The generation will depend on your GPU architecture.
Let’s compare the execution of a gpt2 language model training over a small sample of wikitext.
The results are:
NVlink Time
Y 101s
N 131s
You can see that NVLink completes the training ~23% faster. In the second benchmark we use NCCL_P2P_DISABLE=1 to tell the GPUs not to use NVLink.
Here is the full benchmark code and outputs:
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch \
--nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \
--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \
--output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69}
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 NCCL_P2P_DISABLE=1 python -m torch.distributed.launch \
--nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \
--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \
--output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69}
Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks (NV2 in nvidia-smi topo -m) Software: pytorch-1.8-to-be + cuda-11.0 / transformers==4.3.0.dev0 |
https://huggingface.co/docs/transformers/perf_train_special | Training on Specialized Hardware
Note: Most of the strategies introduced in the single GPU section (such as mixed precision training or gradient accumulation) and multi-GPU section are generic and apply to training models in general so make sure to have a look at it before diving into this section.
This document will be completed soon with information on how to train on specialized hardware. |
https://huggingface.co/docs/transformers/perf_train_cpu | Efficient Training on CPU
This guide focuses on training large models efficiently on CPU.
Mixed precision with IPEX
IPEX is optimized for CPUs with AVX-512 or above, and it functionally works for CPUs with only AVX2. It is therefore expected to bring a performance benefit on Intel CPU generations with AVX-512 or above, while CPUs with only AVX2 (e.g., AMD CPUs or older Intel CPUs) might see better performance under IPEX, but this is not guaranteed. IPEX provides performance optimizations for CPU training with both Float32 and BFloat16. The usage of BFloat16 is the main focus of the following sections.
Low precision data type BFloat16 has been natively supported on the 3rd Generation Xeon® Scalable Processors (aka Cooper Lake) with AVX512 instruction set and will be supported on the next generation of Intel® Xeon® Scalable Processors with Intel® Advanced Matrix Extensions (Intel® AMX) instruction set with further boosted performance. The Auto Mixed Precision for CPU backend has been enabled since PyTorch-1.10. At the same time, the support of Auto Mixed Precision with BFloat16 for CPU and BFloat16 optimization of operators has been massively enabled in Intel® Extension for PyTorch, and partially upstreamed to PyTorch master branch. Users can get better performance and user experience with IPEX Auto Mixed Precision.
Check more detailed information for Auto Mixed Precision.
IPEX installation:
IPEX releases follow PyTorch releases. To install via pip:
PyTorch Version IPEX version
1.13 1.13.0+cpu
1.12 1.12.300+cpu
1.11 1.11.200+cpu
1.10 1.10.100+cpu
pip install intel_extension_for_pytorch==<version_name> -f https:
Check more approaches for IPEX installation.
Usage in Trainer
To enable auto mixed precision with IPEX in Trainer, users should add `use_ipex`, `bf16` and `no_cuda` in training command arguments.
Take the Transformers question-answering example as a use case.
Training with IPEX using BF16 auto mixed precision on CPU:
python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/ \
--use_ipex \
--bf16 --no_cuda
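If you configure Trainer from Python rather than from the command line, the equivalent settings are roughly (a sketch):
training_args = TrainingArguments(
    output_dir="/tmp/debug_squad/",
    use_ipex=True,  # enable IPEX optimizations
    bf16=True,      # BF16 auto mixed precision
    no_cuda=True,   # train on CPU
)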
Practice example
Blog: Accelerating PyTorch Transformers with Intel Sapphire Rapids |
https://huggingface.co/docs/transformers/custom_tools | Custom Tools and Prompts
If you are not aware of what tools and agents are in the context of transformers, we recommend you read the Transformers Agents page first.
Transformers Agents is an experimental API that is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change.
Creating and using custom tools and prompts is paramount to empowering the agent and having it perform new tasks. In this guide we’ll take a look at:
How to customize the prompt
How to use custom tools
How to create custom tools
Customizing the prompt
As explained in Transformers Agents agents can run in run() and chat() mode. Both the run and chat modes underlie the same logic. The language model powering the agent is conditioned on a long prompt and completes the prompt by generating the next tokens until the stop token is reached. The only difference between the two modes is that during the chat mode the prompt is extended with previous user inputs and model generations. This allows the agent to have access to past interactions, seemingly giving the agent some kind of memory.
Structure of the prompt
Let’s take a closer look at how the prompt is structured to understand how it can be best customized. The prompt is structured broadly into four parts.
Introduction: how the agent should behave, explanation of the concept of tools.
Description of all the tools. This is defined by a <<all_tools>> token that is dynamically replaced at runtime with the tools defined/chosen by the user.
A set of examples of tasks and their solution
Current example, and request for solution.
To better understand each part, let’s look at a shortened version of what the run prompt can look like:
I will ask you to perform a task, your job is to come up with a series of simple commands in Python that will perform the task.
[...]
You can print intermediate results if it makes sense to do so.
Tools:
- document_qa: This is a tool that answers a question about a document (pdf). It takes an input named `document` which should be the document containing the information, as well as a `question` that is the question about the document. It returns a text that contains the answer to the question.
- image_captioner: This is a tool that generates a description of an image. It takes an input named `image` which should be the image to the caption and returns a text that contains the description in English.
[...]
Task: "Answer the question in the variable `question` about the image stored in the variable `image`. The question is in French."
I will use the following tools: `translator` to translate the question into English and then `image_qa` to answer the question on the input image.
Answer:
```py
translated_question = translator(question=question, src_lang="French", tgt_lang="English")
print(f"The translated question is {translated_question}.")
answer = image_qa(image=image, question=translated_question)
print(f"The answer is {answer}")
```
Task: "Identify the oldest person in the `document` and create an image showcasing the result as a banner."
I will use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer.
Answer:
```py
answer = document_qa(document, question="What is the oldest person?")
print(f"The answer is {answer}.")
image = image_generator("A banner showing " + answer)
```
[...]
Task: "Draw me a picture of rivers and lakes"
I will use the following
The introduction (the text before “Tools:”) explains precisely how the model shall behave and what it should do. This part most likely does not need to be customized as the agent shall always behave the same way.
The second part (the bullet points below “Tools”) is dynamically added upon calling run or chat. There are exactly as many bullet points as there are tools in agent.toolbox and each bullet point consists of the name and description of the tool:
- <tool.name>: <tool.description>
Let’s verify this quickly by loading the document_qa tool and printing out the name and description.
from transformers import load_tool
document_qa = load_tool("document-question-answering")
print(f"- {document_qa.name}: {document_qa.description}")
which gives:
- document_qa: This is a tool that answers a question about a document (pdf). It takes an input named `document` which should be the document containing the information, as well as a `question` that is the question about the document. It returns a text that contains the answer to the question.
We can see that the tool name is short and precise. The description includes two parts, the first explaining what the tool does and the second states what input arguments and return values are expected.
A good tool name and tool description are very important for the agent to correctly use it. Note that the only information the agent has about the tool is its name and description, so one should make sure that both are precisely written and match the style of the existing tools in the toolbox. In particular make sure the description mentions all the arguments expected by name in code-style, along with the expected type and a description of what they are.
Check the naming and description of the curated Transformers tools to better understand what name and description a tool is expected to have. You can see all tools with the Agent.toolbox property.
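For instance, you can print every tool’s name and description like this (a sketch):
for name, tool in agent.toolbox.items():
    print(f"- {name}: {tool.description}")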
The third part includes a set of curated examples that show the agent exactly what code it should produce for what kind of user request. The large language models empowering the agent are extremely good at recognizing patterns in a prompt and repeating the pattern with new data. Therefore, it is very important that the examples are written in a way that maximizes the likelihood of the agent generating correct, executable code in practice.
Let’s have a look at one example:
Task: "Identify the oldest person in the `document` and create an image showcasing the result as a banner."
I will use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer.
Answer:
```py
answer = document_qa(document, question="What is the oldest person?")
print(f"The answer is {answer}.")
image = image_generator("A banner showing " + answer)
```
The pattern the model is prompted to repeat has three parts: The task statement, the agent’s explanation of what it intends to do, and finally the generated code. Every example that is part of the prompt has this exact pattern, thus making sure that the agent will reproduce exactly the same pattern when generating new tokens.
The prompt examples are curated by the Transformers team and rigorously evaluated on a set of problem statements to ensure that the agent’s prompt is as good as possible to solve real use cases of the agent.
The final part of the prompt corresponds to:
Task: "Draw me a picture of rivers and lakes"
I will use the following
This is a final, unfinished example that the agent is tasked to complete. The unfinished example is dynamically created based on the actual user input. For the above example, the user ran:
agent.run("Draw me a picture of rivers and lakes")
The user input - a.k.a the task: “Draw me a picture of rivers and lakes” is cast into the prompt template: “Task: <task> \n\n I will use the following”. This sentence makes up the final lines of the prompt the agent is conditioned on, therefore strongly influencing the agent to finish the example exactly in the same way it was previously done in the examples.
Without going into too much detail, the chat template has the same prompt structure with the examples having a slightly different style, e.g.:
[...]
=====
Human: Answer the question in the variable `question` about the image stored in the variable `image`.
Assistant: I will use the tool `image_qa` to answer the question on the input image.
```py
answer = image_qa(text=question, image=image)
print(f"The answer is {answer}")
```
Human: I tried this code, it worked but didn't give me a good result. The question is in French
Assistant: In this case, the question needs to be translated first. I will use the tool `translator` to do this.
```py
translated_question = translator(question=question, src_lang="French", tgt_lang="English")
print(f"The translated question is {translated_question}.")
answer = image_qa(text=translated_question, image=image)
print(f"The answer is {answer}")
```
=====
[...]
Contrary to the examples of the run prompt, each chat prompt example has one or more exchanges between the Human and the Assistant. Every exchange is structured similarly to the example of the run prompt. The user’s input is appended after Human: and the agent is prompted to first generate what needs to be done before generating code. An exchange can build on previous exchanges, allowing the user to refer to past exchanges, as is done e.g. above by the user’s input of “I tried this code”, which refers to the code previously generated by the agent.
Upon running .chat, the user’s input or task is cast into an unfinished example of the form:
Human: <user-input>\n\nAssistant:
which the agent completes. Contrary to the run command, the chat command then appends the completed example to the prompt, thus giving the agent more context for the next chat turn.
Great now that we know how the prompt is structured, let’s see how we can customize it!
Writing good user inputs
While large language models are getting better and better at understanding users’ intentions, it helps enormously to be as precise as possible to help the agent pick the correct task. What does it mean to be as precise as possible?
The agent sees a list of tool names and their descriptions in its prompt. The more tools are added, the more difficult it becomes for the agent to choose the correct tool, and it’s even more difficult to choose the correct sequence of tools to run. Let’s look at a common failure case; here we will only return the code so we can analyze it.
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
agent.run("Show me a tree", return_code=True)
gives:
==Explanation from the agent==
I will use the following tool: `image_segmenter` to create a segmentation mask for the image.
==Code generated by the agent==
mask = image_segmenter(image, prompt="tree")
which is probably not what we wanted. Instead, it is more likely that we want an image of a tree to be generated. To steer the agent more towards using a specific tool it can therefore be very helpful to use important keywords that are present in the tool’s name and description. Let’s have a look.
agent.toolbox["image_generator"].description
'This is a tool that creates an image according to a prompt, which is a text description. It takes an input named `prompt` which contains the image description and outputs an image.'
The name and description make use of the keywords “image”, “prompt”, “create” and “generate”. Using these words will most likely work better here. Let’s refine our prompt a bit.
agent.run("Create an image of a tree", return_code=True)
gives:
==Explanation from the agent==
I will use the following tool `image_generator` to generate an image of a tree.
==Code generated by the agent==
image = image_generator(prompt="tree")
Much better! That looks more like what we want. In short, when you notice that the agent struggles to correctly map your task to the correct tools, try looking up the most pertinent keywords of the tool’s name and description and try refining your task request with it.
Customizing the tool descriptions
As we’ve seen before, the agent has access to each of the tools’ names and descriptions. The base tools should have very precise names and descriptions; however, you might find that it helps to change the description or name of a tool for your specific use case. This can become especially important when you’ve added multiple tools that are very similar or if you want to use your agent only for a certain domain, e.g. image generation and transformations.
A common problem is that the agent confuses image generation with image transformation/modification when used a lot for image generation tasks, e.g.
agent.run("Make an image of a house and a car", return_code=True)
returns
==Explanation from the agent==
I will use the following tools `image_generator` to generate an image of a house and `image_transformer` to transform the image of a car into the image of a house.
==Code generated by the agent==
house_image = image_generator(prompt="A house")
car_image = image_generator(prompt="A car")
house_car_image = image_transformer(image=car_image, prompt="A house")
which is probably not exactly what we want here. It seems like the agent has a difficult time understanding the difference between image_generator and image_transformer and often uses the two together.
We can help the agent here by changing the tool name and description of image_transformer. Let’s instead call it modifier to disassociate it a bit from “image” and “prompt”:
agent.toolbox["modifier"] = agent.toolbox.pop("image_transformer")
agent.toolbox["modifier"].description = agent.toolbox["modifier"].description.replace(
"transforms an image according to a prompt", "modifies an image"
)
Now “modify” is a strong cue to use the new image processor which should help with the above prompt. Let’s run it again.
agent.run("Make an image of a house and a car", return_code=True)
Now we’re getting:
==Explanation from the agent==
I will use the following tools: `image_generator` to generate an image of a house, then `image_generator` to generate an image of a car.
==Code generated by the agent==
house_image = image_generator(prompt="A house")
car_image = image_generator(prompt="A car")
which is definitely closer to what we had in mind! However, we want to have both the house and car in the same image. Steering the task more toward single image generation should help:
agent.run("Create image: 'A house and car'", return_code=True)
==Explanation from the agent==
I will use the following tool: `image_generator` to generate an image.
==Code generated by the agent==
image = image_generator(prompt="A house and car")
Agents are still brittle for many use cases, especially when it comes to slightly more complex use cases like generating an image of multiple objects. Both the agent itself and the underlying prompt will be further improved in the coming months making sure that agents become more robust to a variety of user inputs.
Customizing the whole prompt
To give the user maximum flexibility, the whole prompt template as explained in above can be overwritten by the user. In this case make sure that your custom prompt includes an introduction section, a tool section, an example section, and an unfinished example section. If you want to overwrite the run prompt template, you can do as follows:
template = """ [...] """
agent = HfAgent(your_endpoint, run_prompt_template=template)
Please make sure to have the <<all_tools>> string and the <<prompt>> defined somewhere in the template so that the agent is aware of the tools it has available and can correctly insert the user’s prompt.
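A bare-bones run template could therefore look something like the following sketch (a sketch only; a real template should also contain worked examples, as described above):
template = """I will ask you to perform a task, and your job is to write Python code using the tools below.
Tools:
<<all_tools>>
Task: "<<prompt>>"
I will use the following"""
agent = HfAgent(your_endpoint, run_prompt_template=template)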
Similarly, one can overwrite the chat prompt template. Note that the chat mode always uses the following format for the exchanges:
Human: <<task>>
Assistant:
Therefore it is important that the examples of the custom chat prompt template also make use of this format. You can overwrite the chat template at instantiation as follows.
template = """ [...] """
agent = HfAgent(url_endpoint=your_endpoint, chat_prompt_template=template)
Please make sure to have the <<all_tools>> string defined somewhere in the template so that the agent is aware of the tools it has available.
In both cases, you can pass a repo ID instead of the prompt template if you would like to use a template hosted by someone in the community. The default prompts live in this repo as an example.
To upload your custom prompt on a repo on the Hub and share it with the community just make sure:
to use a dataset repository
to put the prompt template for the run command in a file named run_prompt_template.txt
to put the prompt template for the chat command in a file named chat_prompt_template.txt
Using custom tools
In this section, we’ll be leveraging two existing custom tools that are specific to image generation:
We replace huggingface-tools/image-transformation with diffusers/controlnet-canny-tool to allow for more image modifications.
We add a new tool for image upscaling to the default toolbox: diffusers/latent-upscaler-tool.
We’ll start by loading the custom tools with the convenient load_tool() function:
from transformers import load_tool
controlnet_transformer = load_tool("diffusers/controlnet-canny-tool")
upscaler = load_tool("diffusers/latent-upscaler-tool")
Upon adding custom tools to an agent, the tools’ descriptions and names are automatically included in the agents’ prompts. Thus, it is imperative that custom tools have a well-written description and name in order for the agent to understand how to use them. Let’s take a look at the description and name of controlnet_transformer:
print(f"Description: '{controlnet_transformer.description}'")
print(f"Name: '{controlnet_transformer.name}'")
gives
Description: 'This is a tool that transforms an image with ControlNet according to a prompt.
It takes two inputs: `image`, which should be the image to transform, and `prompt`, which should be the prompt to use to change it. It returns the modified image.'
Name: 'image_transformer'
The name and description are accurate and fit the style of the curated set of tools. Next, let’s instantiate an agent with controlnet_transformer and upscaler:
tools = [controlnet_transformer, upscaler]
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=tools)
This command should give you the following info:
image_transformer has been replaced by <transformers_modules.diffusers.controlnet-canny-tool.bd76182c7777eba9612fc03c0
8718a60c0aa6312.image_transformation.ControlNetTransformationTool object at 0x7f1d3bfa3a00> as provided in `additional_tools`
The set of curated tools already has an image_transformer tool which is hereby replaced with our custom tool.
Overwriting existing tools can be beneficial if we want to use a custom tool for exactly the same task as an existing tool, because the agent is already well-versed in using that tool for the specific task. Beware that the custom tool should follow the exact same API as the overwritten tool in this case, or you should adapt the prompt template to make sure all examples using that tool are updated.
The upscaler tool was given the name image_upscaler which is not yet present in the default toolbox and is therefore simply added to the list of tools. You can always have a look at the toolbox that is currently available to the agent via the agent.toolbox attribute:
print("\n".join([f"- {a}" for a in agent.toolbox.keys()]))
- document_qa
- image_captioner
- image_qa
- image_segmenter
- transcriber
- summarizer
- text_classifier
- text_qa
- text_reader
- translator
- image_transformer
- text_downloader
- image_generator
- video_generator
- image_upscaler
Note how image_upscaler is now part of the agents’ toolbox.
Let’s now try out the new tools! We will re-use the image we generated in Transformers Agents Quickstart.
from diffusers.utils import load_image
image = load_image(
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png"
)
Let’s transform the image into a beautiful winter landscape:
image = agent.run("Transform the image: 'A frozen lake and snowy forest'", image=image)
==Explanation from the agent==
I will use the following tool: `image_transformer` to transform the image.
==Code generated by the agent==
image = image_transformer(image, prompt="A frozen lake and snowy forest")
The new image processing tool is based on ControlNet which can make very strong modifications to the image. By default the image processing tool returns an image of size 512x512 pixels. Let’s see if we can upscale it.
image = agent.run("Upscale the image", image)
==Explanation from the agent==
I will use the following tool: `image_upscaler` to upscale the image.
==Code generated by the agent==
upscaled_image = image_upscaler(image)
The agent automatically mapped our prompt “Upscale the image” to the just added upscaler tool purely based on the description and name of the upscaler tool and was able to correctly run it.
Next, let’s have a look at how you can create a new custom tool.
Adding new tools
In this section, we show how to create a new tool that can be added to the agent.
Creating a new tool
We’ll first start by creating a tool. We’ll add the not-so-useful yet fun task of fetching the model on the Hugging Face Hub with the most downloads for a given task.
We can do that with the following code:
from huggingface_hub import list_models
task = "text-classification"
model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
print(model.id)
For the task text-classification, this returns 'facebook/bart-large-mnli'; for translation it returns 't5-base'.
How do we convert this to a tool that the agent can leverage? All tools depend on the superclass Tool that holds the main attributes necessary. We’ll create a class that inherits from it:
from transformers import Tool
class HFModelDownloadsTool(Tool):
pass
This class has a few needs:
An attribute name, which corresponds to the name of the tool itself. To be consistent with other tools, which have names describing what they do, we’ll name it model_download_counter.
An attribute description, which will be used to populate the prompt of the agent.
inputs and outputs attributes. Defining this will help the python interpreter make educated choices about types, and will allow for a gradio-demo to be spawned when we push our tool to the Hub. They’re both a list of expected values, which can be text, image, or audio.
A __call__ method which contains the inference code. This is the code we’ve played with above!
Here’s what our class looks like now:
from transformers import Tool
from huggingface_hub import list_models
class HFModelDownloadsTool(Tool):
name = "model_download_counter"
description = (
"This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. "
"It takes the name of the category (such as text-classification, depth-estimation, etc), and "
"returns the name of the checkpoint."
)
inputs = ["text"]
outputs = ["text"]
def __call__(self, task: str):
model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
return model.id
We now have our tool handy. Save it in a file and import it from your main script. Let’s name this file model_downloads.py, so the resulting import code looks like this:
from model_downloads import HFModelDownloadsTool
tool = HFModelDownloadsTool()
In order to let others benefit from it and for simpler initialization, we recommend pushing it to the Hub under your namespace. To do so, just call push_to_hub on the tool variable:
tool.push_to_hub("hf-model-downloads")
You now have your code on the Hub! Let’s take a look at the final step, which is to have the agent use it.
Having the agent use the tool
We now have our tool that lives on the Hub which can be instantiated as such (change the user name for your tool):
from transformers import load_tool
tool = load_tool("lysandre/hf-model-downloads")
In order to use it in the agent, simply pass it in the additional_tools parameter of the agent initialization method:
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=[tool])
agent.run(
"Can you read out loud the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
)
which outputs the following:
==Code generated by the agent==
model = model_download_counter(task="text-to-video")
print(f"The model with the most downloads is {model}.")
audio_model = text_reader(model)
==Result==
The model with the most downloads is damo-vilab/text-to-video-ms-1.7b.
and generates the following audio.
Audio
Depending on the LLM, some are quite brittle and require very exact prompts in order to work well. Having a well-defined name and description of the tool is paramount to having it be leveraged by the agent.
Replacing existing tools
Replacing existing tools can be done simply by assigning a new item to the agent’s toolbox. Here’s how one would do so:
from transformers import HfAgent, load_tool
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
agent.toolbox["image-transformation"] = load_tool("diffusers/controlnet-canny-tool")
Beware when replacing tools with others! This will also adjust the agent’s prompt. This can be good if you have a better prompt suited for the task, but it can also result in your tool being selected way more than others or for other tools to be selected instead of the one you have defined.
Leveraging gradio-tools
gradio-tools is a powerful library that allows using Hugging Face Spaces as tools. It supports many existing Spaces as well as custom Spaces to be designed with it.
We offer support for gradio_tools by using the Tool.from_gradio method. For example, we want to take advantage of the StableDiffusionPromptGeneratorTool tool offered in the gradio-tools toolkit so as to improve our prompts and generate better images.
We first import the tool from gradio_tools and instantiate it:
from gradio_tools import StableDiffusionPromptGeneratorTool
gradio_tool = StableDiffusionPromptGeneratorTool()
We pass that instance to the Tool.from_gradio method:
from transformers import Tool
tool = Tool.from_gradio(gradio_tool)
Now we can manage it exactly as we would a usual custom tool. We leverage it to improve our prompt a rabbit wearing a space suit:
from transformers import HfAgent
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=[tool])
agent.run("Generate an image of the `prompt` after improving it.", prompt="A rabbit wearing a space suit")
The model adequately leverages the tool:
==Explanation from the agent==
I will use the following tools: `StableDiffusionPromptGenerator` to improve the prompt, then `image_generator` to generate an image according to the improved prompt.
==Code generated by the agent==
improved_prompt = StableDiffusionPromptGenerator(prompt)
print(f"The improved prompt is {improved_prompt}.")
image = image_generator(improved_prompt)
Before finally generating the image:
gradio-tools requires textual inputs and outputs even when working with different modalities, while this implementation works with image and audio objects. The two are currently incompatible, but will become compatible as we work to improve the support.
Future compatibility with Langchain
We love Langchain and think it has a very compelling suite of tools. In order to handle these tools, Langchain requires textual inputs and outputs, even when working with different modalities. This is often the serialized version (i.e., saved to disk) of the objects.
This difference means that multi-modality isn’t handled between transformers-agents and langchain. We aim for this limitation to be resolved in future versions, and welcome any help from avid langchain users to help us achieve this compatibility.
We would love to have better support. If you would like to help, please open an issue and share what you have in mind. |
https://huggingface.co/docs/transformers/perf_infer_cpu | Efficient Inference on CPU
This guide focuses on inferencing large models efficiently on CPU.
BetterTransformer for faster inference
We have recently integrated BetterTransformer for faster inference on CPU for text, image and audio models. Check the documentation about this integration here for more details.
PyTorch JIT-mode (TorchScript)
TorchScript is a way to create serializable and optimizable models from PyTorch code. Any TorchScript program can be saved from a Python process and loaded in a process where there is no Python dependency. Compared to the default eager mode, jit mode in PyTorch normally yields better performance for model inference thanks to optimization methodologies such as operator fusion.
For a gentle introduction to TorchScript, see the Introduction to PyTorch TorchScript tutorial.
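For a concrete starting point, here is a minimal sketch of tracing a Transformers model with TorchScript for CPU inference (the checkpoint and input text are illustrative):
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# torchscript=True makes the model return tuples so that it can be traced
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", torchscript=True)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("TorchScript on CPU", return_tensors="pt")
with torch.no_grad():
    traced_model = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))
    logits = traced_model(inputs["input_ids"], inputs["attention_mask"])[0]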
IPEX Graph Optimization with JIT-mode
Intel® Extension for PyTorch provides further optimizations in jit mode for Transformers series models. It is highly recommended for users to take advantage of Intel® Extension for PyTorch with jit mode. Some frequently used operator patterns from Transformers models are already supported in Intel® Extension for PyTorch with jit mode fusions. Fusion patterns such as Multi-head-attention fusion, Concat Linear, Linear+Add, Linear+Gelu, and Add+LayerNorm fusion are enabled and perform well. The benefit of the fusion is delivered to users in a transparent fashion. According to the analysis, ~70% of the most popular NLP tasks in question-answering, text-classification, and token-classification can get performance benefits from these fusion patterns, for both Float32 precision and BFloat16 mixed precision.
Check more detailed information for IPEX Graph Optimization.
IPEX installation:
IPEX releases follow PyTorch releases; check the approaches for IPEX installation.
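As a rough sketch (not the official recipe), applying IPEX optimizations before tracing could look like this, assuming intel_extension_for_pytorch is installed:
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", torchscript=True)
model.eval()
# apply IPEX operator fusions and other graph-level optimizations, then trace as usual
model = ipex.optimize(model, dtype=torch.float32)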
Usage of JIT-mode
To enable JIT-mode in Trainer for evaluation or prediction, users should add `jit_mode_eval` to the Trainer command arguments.
For PyTorch >= 1.14.0, JIT-mode could benefit any model for prediction and evaluation since dict input is supported in jit.trace.
For PyTorch < 1.14.0, JIT-mode could benefit models whose forward parameter order matches the tuple input order in jit.trace, such as question-answering models. In the case where the forward parameter order does not match the tuple input order in jit.trace, such as text-classification models, jit.trace will fail; we catch this with an exception to make it fall back, and logging is used to notify users.
Take the Transformers question-answering task as an example of these use cases.
Inference using jit mode on CPU:
python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
--no_cuda \
--jit_mode_eval
Inference with IPEX using jit mode on CPU:
python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
--no_cuda \
--use_ipex \
--jit_mode_eval |
https://huggingface.co/docs/transformers/perf_train_gpu_many | Efficient Training on Multiple GPUs
When training on a single GPU is too slow or the model weights don’t fit into a single GPU’s memory, we use a multi-GPU setup. Switching from a single GPU to multiple GPUs requires some form of parallelism as the work needs to be distributed. There are several techniques to achieve parallelism, such as data, tensor, or pipeline parallelism. However, there is no one solution to fit them all, and which setting works best depends on the hardware you are running on. While the main concepts most likely will apply to any other framework, this article is focused on PyTorch-based implementations.
Note: Most of the strategies introduced in the single GPU section (such as mixed precision training or gradient accumulation) are generic and apply to training models in general so make sure to have a look at it before diving into the following sections such as multi-GPU or CPU training.
We will first discuss in depth various 1D parallelism techniques and their pros and cons and then look at how they can be combined into 2D and 3D parallelism to enable an even faster training and to support even bigger models. Various other powerful alternative approaches will be presented.
Concepts
The following is the brief description of the main concepts that will be described later in depth in this document.
DataParallel (DP) - the same setup is replicated multiple times, and each being fed a slice of the data. The processing is done in parallel and all setups are synchronized at the end of each training step.
TensorParallel (TP) - each tensor is split up into multiple chunks, so instead of having the whole tensor reside on a single gpu, each shard of the tensor resides on its designated gpu. During processing each shard gets processed separately and in parallel on different GPUs and the results are synced at the end of the step. This is what one may call horizontal parallelism, as the splitting happens on horizontal level.
PipelineParallel (PP) - the model is split up vertically (layer-level) across multiple GPUs, so that only one or several layers of the model are placed on a single GPU. Each GPU processes a different stage of the pipeline in parallel, working on a small chunk of the batch.
Zero Redundancy Optimizer (ZeRO) - Also performs sharding of the tensors somewhat similar to TP, except the whole tensor gets reconstructed in time for a forward or backward computation, therefore the model doesn’t need to be modified. It also supports various offloading techniques to compensate for limited GPU memory.
Sharded DDP - is another name for the foundational ZeRO concept as used by various other implementations of ZeRO.
Before diving deeper into the specifics of each concept we first have a look at the rough decision process when training large models on a large infrastructure.
Scalability Strategy
⇨ Single Node / Multi-GPU
Model fits onto a single GPU:
DDP - Distributed DP
ZeRO - may or may not be faster depending on the situation and configuration used
Model doesn’t fit onto a single GPU:
PP
ZeRO
TP
With very fast intra-node connectivity of NVLINK or NVSwitch all three should be mostly on par, without these PP will be faster than TP or ZeRO. The degree of TP may also make a difference. Best to experiment to find the winner on your particular setup.
TP is almost always used within a single node. That is TP size <= gpus per node.
Largest Layer not fitting into a single GPU:
If not using ZeRO - must use TP, as PP alone won’t be able to fit.
With ZeRO see the same entry for “Single GPU” above
⇨ Multi-Node / Multi-GPU
When you have fast inter-node connectivity:
ZeRO - as it requires close to no modifications to the model
PP+TP+DP - less communications, but requires massive changes to the model
when you have slow inter-node connectivity and still low on GPU memory:
DP+PP+TP+ZeRO-1
Data Parallelism
Most users with just 2 GPUs already enjoy the increased training speed up thanks to DataParallel (DP) and DistributedDataParallel (DDP) that are almost trivial to use. This is a built-in feature of Pytorch. Note that in general it is advised to use DDP as it is better maintained and works for all models while DP might fail for some models. PyTorch documentation itself recommends the use of DDP.
DP vs DDP
DistributedDataParallel (DDP) is typically faster than DataParallel (DP), but it is not always the case:
while DP is python threads-based, DDP is multiprocess-based - and as such it has no python threads limitations, such as GIL
on the other hand a slow inter-connectivity between the GPU cards could lead to an actual slower outcome with DDP
Here are the main differences in the inter-GPU communication overhead between the two modes:
DDP:
At the start time the main process replicates the model once from gpu 0 to the rest of gpus
Then for each batch:
each gpu consumes its own mini-batch of data directly
during backward, once the local gradients are ready, they are then averaged across all processes
DP:
For each batch:
gpu 0 reads the batch of data and then sends a mini-batch to each gpu
replicates the up-to-date model from gpu 0 to each gpu
runs forward and sends output from each gpu to gpu 0, computes loss
scatters loss from gpu 0 to all gpus, runs backward
sends gradients from each gpu to gpu 0 and averages those
The only communication DDP performs per batch is sending gradients, whereas DP does 5 different data exchanges per batch.
DP copies data within the process via python threads, whereas DDP copies data via torch.distributed.
Under DP gpu 0 performs a lot more work than the rest of the gpus, thus resulting in under-utilization of gpus.
You can use DDP across multiple machines, but this is not the case with DP.
There are other differences between DP and DDP but they aren’t relevant to this discussion.
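To make the two modes concrete, here is a minimal (non-Transformers) sketch of wrapping a model with each; the DDP branch assumes a launcher such as torch.distributed.launch has already set up the process group environment variables:
import torch
import torch.distributed as dist
from torch.nn.parallel import DataParallel, DistributedDataParallel

model = torch.nn.Linear(10, 10).to("cuda")

# DP: a single process fans each batch out over all visible GPUs via python threads
dp_model = DataParallel(model)

# DDP: one process per GPU; gradients are averaged across processes via torch.distributed
dist.init_process_group(backend="nccl")
ddp_model = DistributedDataParallel(model, device_ids=[torch.cuda.current_device()])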
If you want to go really deep into understanding these 2 modes, this article is highly recommended, as it has great diagrams, includes multiple benchmarks and profiler outputs on various hardware, explains all the nuances that you may need to know.
Let’s look at an actual benchmark:
Type NVlink Time
2:DP Y 110s
2:DDP Y 101s
2:DDP N 131s
Analysis:
Here DP is ~10% slower than DDP w/ NVlink, but ~15% faster than DDP w/o NVlink
The real difference will depend on how much data each GPU needs to sync with the others - the more there is to sync, the more a slow link will slow down the total runtime.
Here is the full benchmark code and outputs:
NCCL_P2P_DISABLE=1 was used to disable the NVLink feature on the corresponding benchmark.
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
python examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 110.5948, 'train_samples_per_second': 1.808, 'epoch': 0.69}
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
python -m torch.distributed.launch --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69}
rm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 \
python -m torch.distributed.launch --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69}
Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks (NV2 in nvidia-smi topo -m) Software: pytorch-1.8-to-be + cuda-11.0 / transformers==4.3.0.dev0
ZeRO Data Parallelism
ZeRO-powered data parallelism (ZeRO-DP) is described on the following diagram from this blog post
It can be difficult to wrap one’s head around it, but in reality the concept is quite simple. This is just the usual DataParallel (DP), except, instead of replicating the full model params, gradients and optimizer states, each GPU stores only a slice of it. And then at run-time when the full layer params are needed just for the given layer, all GPUs synchronize to give each other parts that they miss - this is it.
Consider this simple model with 3 layers, where each layer has 3 params:
La | Lb | Lc
---|----|---
a0 | b0 | c0
a1 | b1 | c1
a2 | b2 | c2
Layer La has weights a0, a1 and a2.
If we have 3 GPUs, the Sharded DDP (= Zero-DP) splits the model onto 3 GPUs like so:
GPU0:
La | Lb | Lc
---|----|---
a0 | b0 | c0

GPU1:
La | Lb | Lc
---|----|---
a1 | b1 | c1

GPU2:
La | Lb | Lc
---|----|---
a2 | b2 | c2
In a way this is the same horizontal slicing as tensor parallelism, if you imagine the typical DNN diagram. Vertical slicing is where one puts whole layer-groups on different GPUs. But it’s just the starting point.
Now each of these GPUs will get the usual mini-batch as it works in DP:
x0 => GPU0
x1 => GPU1
x2 => GPU2
The inputs are unmodified - they think they are going to be processed by the normal model.
First, the inputs hit the layer La.
Let’s focus just on GPU0: x0 needs a0, a1, a2 params to do its forward path, but GPU0 has only a0 - it gets sent a1 from GPU1 and a2 from GPU2, bringing all pieces of the model together.
In parallel, GPU1 gets mini-batch x1 and it only has a1, but needs a0 and a2 params, so it gets those from GPU0 and GPU2.
Same happens to GPU2 that gets input x2. It gets a0 and a1 from GPU0 and GPU1, and with its a2 it reconstructs the full tensor.
All 3 GPUs get the full tensors reconstructed and a forward happens.
As soon as the calculation is done, the data that is no longer needed gets dropped - it’s only used during the calculation. The reconstruction is done efficiently via a pre-fetch.
And the whole process is repeated for layer Lb, then Lc forward-wise, and then backward Lc -> Lb -> La.
To me this sounds like an efficient group backpacking weight distribution strategy:
person A carries the tent
person B carries the stove
person C carries the axe
Now each night they all share what they have with others and get from others what they don’t have, and in the morning they pack up their allocated type of gear and continue on their way. This is Sharded DDP / Zero DP.
Compare this strategy to the simple one where each person has to carry their own tent, stove and axe, which would be far more inefficient. This is DataParallel (DP and DDP) in Pytorch.
While reading the literature on this topic you may encounter the following synonyms: Sharded, Partitioned.
If you pay close attention to the way ZeRO partitions the model’s weights, it looks very similar to tensor parallelism, which will be discussed later. This is because it partitions/shards each layer’s weights, unlike vertical model parallelism which is discussed next.
Implementations:
DeepSpeed ZeRO-DP stages 1+2+3
transformers integration
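As a hedged sketch of the transformers integration, a DeepSpeed ZeRO config file can be passed directly to TrainingArguments (the config path here is a placeholder you would create yourself):
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output",
    per_device_train_batch_size=4,
    deepspeed="ds_config_zero3.json",  # placeholder path to a DeepSpeed ZeRO config
)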
Naive Model Parallelism (Vertical) and Pipeline Parallelism
Naive Model Parallelism (MP) is where one spreads groups of model layers across multiple GPUs. The mechanism is relatively simple - switch the desired layers .to() the desired devices and now whenever the data goes in and out those layers switch the data to the same device as the layer and leave the rest unmodified.
We refer to it as Vertical MP, because if you remember how most models are drawn, we slice the layers vertically. For example, if the following diagram shows an 8-layer model:
=================== ===================
| 0 | 1 | 2 | 3 | | 4 | 5 | 6 | 7 |
=================== ===================
gpu0 gpu1
we just sliced it in 2 vertically, placing layers 0-3 onto GPU0 and 4-7 to GPU1.
Now while data travels from layer 0 to 1, 1 to 2 and 2 to 3 this is just the normal model. But when data needs to pass from layer 3 to layer 4 it needs to travel from GPU0 to GPU1 which introduces a communication overhead. If the participating GPUs are on the same compute node (e.g. same physical machine) this copying is pretty fast, but if the GPUs are located on different compute nodes (e.g. multiple machines) the communication overhead could be significantly larger.
Then layers 4 to 5 to 6 to 7 are as a normal model would have and when the 7th layer completes we often need to send the data back to layer 0 where the labels are (or alternatively send the labels to the last layer). Now the loss can be computed and the optimizer can do its work.
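As a toy sketch of this mechanism (not how Transformers implements it), a model split vertically across two GPUs could look like this:
import torch
import torch.nn as nn

class TwoGPUModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.part1 = nn.Sequential(nn.Linear(512, 512), nn.ReLU()).to("cuda:0")
        self.part2 = nn.Sequential(nn.Linear(512, 512), nn.ReLU()).to("cuda:1")

    def forward(self, x):
        x = self.part1(x.to("cuda:0"))
        # this device-to-device copy is exactly the communication overhead described above
        return self.part2(x.to("cuda:1"))

out = TwoGPUModel()(torch.randn(8, 512))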
Problems:
the main deficiency and why this one is called “naive” MP, is that all but one GPU is idle at any given moment. So if 4 GPUs are used, it’s almost identical to quadrupling the amount of memory of a single GPU, and ignoring the rest of the hardware. Plus there is the overhead of copying the data between devices. So 4x 6GB cards will be able to accommodate the same size as 1x 24GB card using naive MP, except the latter will complete the training faster, since it doesn’t have the data copying overhead. But, say, if you have 40GB cards and need to fit a 45GB model you can with 4x 40GB cards (but barely because of the gradient and optimizer states)
shared embeddings may need to get copied back and forth between GPUs.
Pipeline Parallelism (PP) is almost identical to a naive MP, but it solves the GPU idling problem, by chunking the incoming batch into micro-batches and artificially creating a pipeline, which allows different GPUs to concurrently participate in the computation process.
The following illustration from the GPipe paper shows the naive MP on the top, and PP on the bottom:
It’s easy to see from the bottom diagram how PP has less dead zones, where GPUs are idle. The idle parts are referred to as the “bubble”.
Both parts of the diagram show a parallelism that is of degree 4. That is 4 GPUs are participating in the pipeline. So there is the forward path of 4 pipe stages F0, F1, F2 and F3 and then the return reverse order backward path of B3, B2, B1 and B0.
PP introduces a new hyper-parameter to tune and it’s chunks which defines how many chunks of data are sent in a sequence through the same pipe stage. For example, in the bottom diagram you can see that chunks=4. GPU0 performs the same forward path on chunk 0, 1, 2 and 3 (F0,0, F0,1, F0,2, F0,3) and then it waits for other GPUs to do their work and only when their work is starting to be complete, GPU0 starts to work again doing the backward path for chunks 3, 2, 1 and 0 (B0,3, B0,2, B0,1, B0,0).
Note that conceptually this is the same concept as gradient accumulation steps (GAS). Pytorch uses chunks, whereas DeepSpeed refers to the same hyper-parameter as GAS.
Because of the chunks, PP introduces the concept of micro-batches (MBS). DP splits the global data batch size into mini-batches, so if you have a DP degree of 4, a global batch size of 1024 gets split up into 4 mini-batches of 256 each (1024/4). And if the number of chunks (or GAS) is 32 we end up with a micro-batch size of 8 (256/32). Each Pipeline stage works with a single micro-batch at a time.
To calculate the global batch size of the DP + PP setup we then do: mbs*chunks*dp_degree (8*32*4=1024).
Let’s go back to the diagram.
With chunks=1 you end up with the naive MP, which is very inefficient. With a very large chunks value you end up with tiny micro-batch sizes, which may not be very efficient either. So one has to experiment to find the value that leads to the most efficient utilization of the GPUs.
While the diagram shows that there is a bubble of “dead” time that can’t be parallelized because the last forward stage has to wait for backward to complete the pipeline, the purpose of finding the best value for chunks is to enable a high concurrent GPU utilization across all participating GPUs which translates to minimizing the size of the bubble.
There are 2 groups of solutions - the traditional Pipeline API and the more modern solutions that make things much easier for the end user.
Traditional Pipeline API solutions:
PyTorch
DeepSpeed
Megatron-LM
Modern solutions:
Varuna
Sagemaker
Problems with traditional Pipeline API solutions:
have to modify the model quite heavily, because Pipeline requires one to rewrite the normal flow of modules into a nn.Sequential sequence of the same, which may require changes to the design of the model.
currently the Pipeline API is very restricted. If you had a bunch of python variables being passed in the very first stage of the Pipeline, you will have to find a way around it. Currently, the pipeline interface requires either a single Tensor or a tuple of Tensors as the only input and output. These tensors must have a batch size as the very first dimension, since pipeline is going to chunk the mini batch into micro-batches. Possible improvements are being discussed here https://github.com/pytorch/pytorch/pull/50693
conditional control flow at the level of pipe stages is not possible - e.g., Encoder-Decoder models like T5 require special workarounds to handle a conditional encoder stage.
have to arrange each layer so that the output of one model becomes an input to the other model.
We are yet to experiment with Varuna and SageMaker but their papers report that they have overcome the list of problems mentioned above and that they require much smaller changes to the user’s model.
Implementations:
Pytorch (initial support in pytorch-1.8, and progressively getting improved in 1.9 and more so in 1.10). Some examples
DeepSpeed
Megatron-LM has an internal implementation - no API.
Varuna
SageMaker - this is a proprietary solution that can only be used on AWS.
OSLO - this is implemented based on the Hugging Face Transformers.
🤗 Transformers status: as of this writing none of the models supports full-PP. GPT2 and T5 models have naive MP support. The main obstacle is being unable to convert the models to nn.Sequential and have all the inputs be Tensors. This is because currently the models include many features that make the conversion very complicated, and these would need to be removed to accomplish that.
Other approaches:
DeepSpeed, Varuna and SageMaker use the concept of an Interleaved Pipeline
Here the bubble (idle time) is further minimized by prioritizing backward passes.
Varuna further tries to improve the schedule by using simulations to discover the most efficient scheduling.
OSLO has pipeline parallelism implementation based on the Transformers without nn.Sequential converting.
Tensor Parallelism
In Tensor Parallelism each GPU processes only a slice of a tensor and only aggregates the full tensor for operations that require the whole thing.
In this section we use concepts and diagrams from the Megatron-LM paper: Efficient Large-Scale Language Model Training on GPU Clusters.
The main building block of any transformer is a fully connected nn.Linear followed by a nonlinear activation GeLU.
Following the Megatron’s paper notation, we can write the dot-product part of it as Y = GeLU(XA), where X and Y are the input and output vectors, and A is the weight matrix.
If we look at the computation in matrix form, it’s easy to see how the matrix multiplication can be split between multiple GPUs:
If we split the weight matrix A column-wise across N GPUs and perform matrix multiplications XA_1 through XA_n in parallel, then we will end up with N output vectors Y_1, Y_2, ..., Y_n which can be fed into GeLU independently:
Using this principle, we can update an MLP of arbitrary depth, without the need for any synchronization between GPUs until the very end, where we need to reconstruct the output vector from shards. The Megatron-LM paper authors provide a helpful illustration for that:
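As a quick numerical sanity check of the column-wise split (a self-contained sketch on a single device with made-up shapes):
import torch

X = torch.randn(4, 16)      # input
A = torch.randn(16, 32)     # weight matrix
A1, A2 = A.chunk(2, dim=1)  # column-wise shards, one per "GPU"

Y_full = torch.nn.functional.gelu(X @ A)
Y_sharded = torch.cat([torch.nn.functional.gelu(X @ A1), torch.nn.functional.gelu(X @ A2)], dim=1)
assert torch.allclose(Y_full, Y_sharded, atol=1e-6)
Because GeLU is applied element-wise, applying it per shard and concatenating gives the same result as applying it to the full product, which is what makes the split synchronization-free until the end.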
Parallelizing the multi-headed attention layers is even simpler, since they are already inherently parallel, due to having multiple independent heads!
Special considerations: TP requires very fast network, and therefore it’s not advisable to do TP across more than one node. Practically, if a node has 4 GPUs, the highest TP degree is therefore 4. If you need a TP degree of 8, you need to use nodes that have at least 8 GPUs.
This section is based on the original, much more detailed TP overview by @anton-l.
SageMaker combines TP with DP for a more efficient processing.
Alternative names:
DeepSpeed calls it tensor slicing
Implementations:
Megatron-LM has an internal implementation, as it’s very model-specific
parallelformers (only inference at the moment)
SageMaker - this is a proprietary solution that can only be used on AWS.
OSLO has the tensor parallelism implementation based on the Transformers.
🤗 Transformers status:
core: not yet implemented in the core
but if you want inference, parallelformers provides this support for most of our models, so until this is implemented in the core you can use theirs. Hopefully training mode will be supported too.
Deepspeed-Inference also supports our BERT, GPT-2, and GPT-Neo models in their super-fast CUDA-kernel-based inference mode, see more here
DP+PP
The following diagram from the DeepSpeed pipeline tutorial demonstrates how one combines DP with PP.
Here it’s important to see how DP rank 0 doesn’t see GPU2 and DP rank 1 doesn’t see GPU3. To DP there are just GPUs 0 and 1 where it feeds data as if there were just 2 GPUs. GPU0 “secretly” offloads some of its load to GPU2 using PP. And GPU1 does the same by enlisting GPU3 to its aid.
Since each dimension requires at least 2 GPUs, here you’d need at least 4 GPUs.
Implementations:
DeepSpeed
Megatron-LM
Varuna
SageMaker
OSLO
🤗 Transformers status: not yet implemented
DP+PP+TP
To get an even more efficient training a 3D parallelism is used where PP is combined with TP and DP. This can be seen in the following diagram.
This diagram is from a blog post 3D parallelism: Scaling to trillion-parameter models, which is a good read as well.
Since each dimension requires at least 2 GPUs, here you’d need at least 8 GPUs.
Implementations:
DeepSpeed - DeepSpeed also includes an even more efficient DP, which they call ZeRO-DP.
Megatron-LM
Varuna
SageMaker
OSLO
🤗 Transformers status: not yet implemented, since we have no PP and TP.
ZeRO DP+PP+TP
One of the main features of DeepSpeed is ZeRO, which is a super-scalable extension of DP. It has already been discussed in ZeRO Data Parallelism. Normally it’s a standalone feature that doesn’t require PP or TP. But it can be combined with PP and TP.
When ZeRO-DP is combined with PP (and optionally TP) it typically enables only ZeRO stage 1 (optimizer sharding).
While it’s theoretically possible to use ZeRO stage 2 (gradient sharding) with Pipeline Parallelism, it will have bad performance impacts. There would need to be an additional reduce-scatter collective for every micro-batch to aggregate the gradients before sharding, which adds a potentially significant communication overhead. By nature of Pipeline Parallelism, small micro-batches are used and instead the focus is on trying to balance arithmetic intensity (micro-batch size) with minimizing the Pipeline bubble (number of micro-batches). Therefore those communication costs are going to hurt.
In addition, there are already fewer layers than normal due to PP, so the memory savings won’t be huge. PP already reduces gradient size by 1/PP, and so gradient sharding savings on top of that are less significant than in pure DP.
ZeRO stage 3 is not a good choice either for the same reason - more inter-node communications required.
And since we have ZeRO, the other benefit is ZeRO-Offload. Since this is stage 1, optimizer states can be offloaded to CPU.
Implementations:
Megatron-DeepSpeed and Megatron-Deepspeed from BigScience, which is the fork of the former repo.
OSLO
Important papers:
Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model
🤗 Transformers status: not yet implemented, since we have no PP and TP.
FlexFlow
FlexFlow also solves the parallelization problem in a slightly different approach.
Paper: “Beyond Data and Model Parallelism for Deep Neural Networks” by Zhihao Jia, Matei Zaharia, Alex Aiken
It performs a sort of 4D Parallelism over Sample-Operator-Attribute-Parameter.
Sample = Data Parallelism (sample-wise parallel)
Operator = Parallelize a single operation into several sub-operations
Attribute = Data Parallelism (length-wise parallel)
Parameter = Model Parallelism (regardless of dimension - horizontal or vertical)
Examples:
Sample
Let’s take 10 batches of sequence length 512. If we parallelize them by the sample dimension into 2 devices, we get 10 x 512 which becomes 5 x 2 x 512.
Operator
If we perform layer normalization, we compute the mean and the std, and then we can normalize the data. Operator parallelism allows computing the mean and std in parallel. So if we parallelize by the operator dimension into 2 devices (cuda:0, cuda:1), first we copy the input data into both devices, then cuda:0 computes the std while cuda:1 computes the mean at the same time.
Attribute
We have 10 batches of 512 length. If we parallelize them by attribute dimension into 2 devices, 10 x 512 will be 10 x 2 x 256.
Parameter
It is similar to tensor model parallelism or naive layer-wise model parallelism.
The significance of this framework is that it takes resources like (1) GPU/TPU/CPU vs. (2) RAM/DRAM vs. (3) fast-intra-connect/slow-inter-connect and it automatically optimizes all of these, algorithmically deciding which parallelisation to use where.
One very important aspect is that FlexFlow is designed for optimizing DNN parallelizations for models with static and fixed workloads, since models with dynamic behavior may prefer different parallelization strategies across iterations.
So the promise is very attractive - it runs a 30min simulation on the cluster of choice and it comes up with the best strategy to utilise this specific environment. If you add/remove/replace any parts it’ll run and re-optimize the plan for that. And then you can train. A different setup will have its own custom optimization.
🤗 Transformers status: not yet integrated. We already have our models FX-trace-able via transformers.utils.fx, which is a prerequisite for FlexFlow, so someone needs to figure out what needs to be done to make FlexFlow work with our models.
Which Strategy To Use When
Here is a very rough outline at which parallelism strategy to use when. The first on each list is typically faster.
⇨ Single GPU
Model fits onto a single GPU:
Normal use
Model doesn’t fit onto a single GPU:
ZeRO + Offload CPU and optionally NVMe
as above plus Memory Centric Tiling (see below for details) if the largest layer can’t fit into a single GPU
Largest Layer not fitting into a single GPU:
ZeRO - Enable Memory Centric Tiling (MCT). It allows you to run arbitrarily large layers by automatically splitting them and executing them sequentially. MCT reduces the number of parameters that are live on a GPU, but it does not affect the activation memory. As this need is very rare as of this writing a manual override of torch.nn.Linear needs to be done by the user.
⇨ Single Node / Multi-GPU
Model fits onto a single GPU:
DDP - Distributed DP
ZeRO - may or may not be faster depending on the situation and configuration used
Model doesn’t fit onto a single GPU:
PP
ZeRO
TP
With very fast intra-node connectivity of NVLINK or NVSwitch all three should be mostly on par, without these PP will be faster than TP or ZeRO. The degree of TP may also make a difference. Best to experiment to find the winner on your particular setup.
TP is almost always used within a single node. That is TP size <= gpus per node.
Largest Layer not fitting into a single GPU:
If not using ZeRO - must use TP, as PP alone won’t be able to fit.
With ZeRO see the same entry for “Single GPU” above
⇨ Multi-Node / Multi-GPU
When you have fast inter-node connectivity:
ZeRO - as it requires close to no modifications to the model
PP+TP+DP - less communications, but requires massive changes to the model
when you have slow inter-node connectivity and still low on GPU memory:
DP+PP+TP+ZeRO-1 |
https://huggingface.co/docs/transformers/troubleshooting | Troubleshoot
Sometimes errors occur, but we are here to help! This guide covers some of the most common issues we’ve seen and how you can resolve them. However, this guide isn’t meant to be a comprehensive collection of every 🤗 Transformers issue. For more help with troubleshooting your issue, try:
Asking for help on the forums. There are specific categories you can post your question to, like Beginners or 🤗 Transformers. Make sure you write a good descriptive forum post with some reproducible code to maximize the likelihood that your problem is solved!
Create an Issue on the 🤗 Transformers repository if it is a bug related to the library. Try to include as much information describing the bug as possible to help us better figure out what’s wrong and how we can fix it.
Check the Migration guide if you use an older version of 🤗 Transformers since some important changes have been introduced between versions.
For more details about troubleshooting and getting help, take a look at Chapter 8 of the Hugging Face course.
Firewalled environments
Some GPU instances on cloud and intranet setups are firewalled to external connections, resulting in a connection error. When your script attempts to download model weights or datasets, the download will hang and then timeout with the following message:
ValueError: Connection error, and we cannot find the requested files in the cached path.
Please try again or make sure your Internet connection is on.
In this case, you should try to run 🤗 Transformers on offline mode to avoid the connection error.
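For example, a minimal way to switch to offline mode is to set the relevant environment variables before importing, assuming the files are already in your local cache:
import os

os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_DATASETS_OFFLINE"] = "1"

from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")  # served from the local cache, no network calls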
CUDA out of memory
Training large models with millions of parameters can be challenging without the appropriate hardware. A common error you may encounter when the GPU runs out of memory is:
CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 11.17 GiB total capacity; 9.70 GiB already allocated; 179.81 MiB free; 9.85 GiB reserved in total by PyTorch)
Here are some potential solutions you can try to lessen memory use (a short example follows the list):
Reduce the per_device_train_batch_size value in TrainingArguments.
Try using gradient_accumulation_steps in TrainingArguments to effectively increase overall batch size.
Refer to the Performance guide for more details about memory-saving techniques.
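For example, the first two suggestions translate into TrainingArguments roughly like this (the values are illustrative, not recommendations):
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output",
    per_device_train_batch_size=4,   # smaller per-device batch size
    gradient_accumulation_steps=8,   # effective batch size of 32 per device
)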
Unable to load a saved TensorFlow model
TensorFlow’s model.save method will save the entire model - architecture, weights, training configuration - in a single file. However, when you load the model file again, you may run into an error because 🤗 Transformers may not load all the TensorFlow-related objects in the model file. To avoid issues with saving and loading TensorFlow models, we recommend you:
Save the model weights to a file with a .h5 extension using model.save_weights and then reload the model with from_pretrained():
>>> from transformers import TFPreTrainedModel
>>> from tensorflow import keras
>>> model.save_weights("some_folder/tf_model.h5")
>>> model = TFPreTrainedModel.from_pretrained("some_folder")
Save the model with save_pretrained() and load it again with from_pretrained():
>>> from transformers import TFPreTrainedModel
>>> model.save_pretrained("path_to/model")
>>> model = TFPreTrainedModel.from_pretrained("path_to/model")
ImportError
Another common error you may encounter, especially if it is a newly released model, is ImportError:
ImportError: cannot import name 'ImageGPTImageProcessor' from 'transformers' (unknown location)
For these error types, check to make sure you have the latest version of 🤗 Transformers installed to access the most recent models:
pip install transformers --upgrade
CUDA error: device-side assert triggered
Sometimes you may run into a generic CUDA error about an error in the device code.
RuntimeError: CUDA error: device-side assert triggered
You should try to run the code on a CPU first to get a more descriptive error message. Add the following environment variable to the beginning of your code to switch to a CPU:
>>> import os
>>> os.environ["CUDA_VISIBLE_DEVICES"] = ""
Another option is to get a better traceback from the GPU. Add the following environment variable to the beginning of your code to get the traceback to point to the source of the error:
>>> import os
>>> os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
Incorrect output when padding tokens aren't masked
In some cases, the output hidden_state may be incorrect if the input_ids include padding tokens. To demonstrate, load a model and tokenizer. You can access a model’s pad_token_id to see its value. The pad_token_id may be None for some models, but you can always manually set it.
>>> from transformers import AutoModelForSequenceClassification
>>> import torch
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
>>> model.config.pad_token_id
0
The following example shows the output without masking the padding tokens:
>>> input_ids = torch.tensor([[7592, 2057, 2097, 2393, 9611, 2115], [7592, 0, 0, 0, 0, 0]])
>>> output = model(input_ids)
>>> print(output.logits)
tensor([[ 0.0082, -0.2307],
[ 0.1317, -0.1683]], grad_fn=<AddmmBackward0>)
Here is the actual output of the second sequence:
>>> input_ids = torch.tensor([[7592]])
>>> output = model(input_ids)
>>> print(output.logits)
tensor([[-0.1008, -0.4061]], grad_fn=<AddmmBackward0>)
Most of the time, you should provide an attention_mask to your model to ignore the padding tokens to avoid this silent error. Now the output of the second sequence matches its actual output:
By default, the tokenizer creates an attention_mask for you based on your specific tokenizer’s defaults.
>>> attention_mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0]])
>>> output = model(input_ids, attention_mask=attention_mask)
>>> print(output.logits)
tensor([[ 0.0082, -0.2307],
[-0.1008, -0.4061]], grad_fn=<AddmmBackward0>)
🤗 Transformers doesn’t automatically create an attention_mask to mask a padding token if it is provided because:
Some models don’t have a padding token.
For some use-cases, users want a model to attend to a padding token.
ValueError: Unrecognized configuration class XYZ for this kind of AutoModel
Generally, we recommend using the AutoModel class to load pretrained instances of models. This class can automatically infer and load the correct architecture from a given checkpoint based on the configuration. If you see this ValueError when loading a model from a checkpoint, this means the Auto class couldn’t find a mapping from the configuration in the given checkpoint to the kind of model you are trying to load. Most commonly, this happens when a checkpoint doesn’t support a given task. For instance, you’ll see this error in the following example because there is no GPT2 for question answering:
>>> from transformers import AutoProcessor, AutoModelForQuestionAnswering
>>> processor = AutoProcessor.from_pretrained("gpt2-medium")
>>> model = AutoModelForQuestionAnswering.from_pretrained("gpt2-medium")
ValueError: Unrecognized configuration class <class 'transformers.models.gpt2.configuration_gpt2.GPT2Config'> for this kind of AutoModel: AutoModelForQuestionAnswering.
Model type should be one of AlbertConfig, BartConfig, BertConfig, BigBirdConfig, BigBirdPegasusConfig, BloomConfig, ... |
https://huggingface.co/docs/transformers/perf_infer_special | Inference on Specialized Hardware
This document will be completed soon with information on how to infer on specialized hardware. In the meantime you can check out the guide for inference on CPUs. |
https://huggingface.co/docs/transformers/perf_infer_gpu_many | Efficient Inference on Multiple GPUs
This document contains information on how to run inference efficiently on multiple GPUs.
Note: A multi-GPU setup can use the majority of the strategies described in the single GPU section. You should, however, be aware of a few simple techniques that can be used for better usage.
Flash Attention 2
Flash Attention 2 integration also works in a multi-GPU setup, check out the appropriate section in the single GPU section
BetterTransformer
BetterTransformer converts 🤗 Transformers models to use the PyTorch-native fastpath execution, which calls optimized kernels like Flash Attention under the hood.
BetterTransformer is also supported for faster inference on single and multi-GPU for text, image, and audio models.
Flash Attention can only be used for models using fp16 or bf16 dtype. Make sure to cast your model to the appropriate dtype before using BetterTransformer.
Decoder models
For text models, especially decoder-based models (GPT, T5, Llama, etc.), the BetterTransformer API converts all attention operations to use the torch.nn.functional.scaled_dot_product_attention operator (SDPA) that is only available in PyTorch 2.0 and onwards.
To convert a model to BetterTransformer:
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model.to_bettertransformer()
SDPA can also call Flash Attention kernels under the hood. To enable Flash Attention or to check that it is available in a given setting (hardware, problem size), use torch.backends.cuda.sdp_kernel as a context manager:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m").to("cuda")
# convert the model to BetterTransformer
model.to_bettertransformer()
input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
+ with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
If you see a bug with a traceback saying
RuntimeError: No available kernel. Aborting execution.
try using the PyTorch nightly version, which may have a broader coverage for Flash Attention:
pip3 install -U --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118
Have a look at this blog post to learn more about what is possible with the BetterTransformer + SDPA API.
Encoder models
For encoder models during inference, BetterTransformer dispatches the forward call of encoder layers to an equivalent of torch.nn.TransformerEncoderLayer that will execute the fastpath implementation of the encoder layers.
Because torch.nn.TransformerEncoderLayer fastpath does not support training, it is dispatched to torch.nn.functional.scaled_dot_product_attention instead, which does not leverage nested tensors but can use Flash Attention or Memory-Efficient Attention fused kernels.
More details about BetterTransformer performance can be found in this blog post, and you can learn more about BetterTransformer for encoder models in this blog.
Advanced usage: mixing FP4 (or Int8) and BetterTransformer
You can combine the different methods described above to get the best performance for your model. For example, you can use BetterTransformer with FP4 mixed-precision inference + flash attention:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=quantization_config)
input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)) |
https://huggingface.co/docs/transformers/model_doc/bartpho | BARTpho
Overview
The BARTpho model was proposed in BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
The abstract from the paper is the following:
We present BARTpho with two versions — BARTpho_word and BARTpho_syllable — the first public large-scale monolingual sequence-to-sequence models pre-trained for Vietnamese. Our BARTpho uses the “large” architecture and pre-training scheme of the sequence-to-sequence denoising model BART, thus especially suitable for generative NLP tasks. Experiments on a downstream task of Vietnamese text summarization show that in both automatic and human evaluations, our BARTpho outperforms the strong baseline mBART and improves the state-of-the-art. We release BARTpho to facilitate future research and applications of generative Vietnamese NLP tasks.
Example of use:
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>> bartpho = AutoModel.from_pretrained("vinai/bartpho-syllable")
>>> tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable")
>>> line = "Chúng tôi là những nghiên cứu viên."
>>> input_ids = tokenizer(line, return_tensors="pt")
>>> with torch.no_grad():
... features = bartpho(**input_ids)
>>> # With TensorFlow 2.0+:
>>> from transformers import TFAutoModel
>>> bartpho = TFAutoModel.from_pretrained("vinai/bartpho-syllable")
>>> input_ids = tokenizer(line, return_tensors="tf")
>>> features = bartpho(**input_ids)
Tips:
Following mBART, BARTpho uses the “large” architecture of BART with an additional layer-normalization layer on top of both the encoder and decoder. Thus, usage examples in the documentation of BART, when adapting to use with BARTpho, should be adjusted by replacing the BART-specialized classes with the mBART-specialized counterparts. For example:
>>> from transformers import MBartForConditionalGeneration
>>> bartpho = MBartForConditionalGeneration.from_pretrained("vinai/bartpho-syllable")
>>> TXT = "Chúng tôi là <mask> nghiên cứu viên."
>>> input_ids = tokenizer([TXT], return_tensors="pt")["input_ids"]
>>> logits = bartpho(input_ids).logits
>>> masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
>>> probs = logits[0, masked_index].softmax(dim=0)
>>> values, predictions = probs.topk(5)
>>> print(tokenizer.decode(predictions).split())
This implementation is only for tokenization: “monolingual_vocab_file” consists of Vietnamese-specialized types extracted from the pre-trained SentencePiece model “vocab_file” that is available from the multilingual XLM-RoBERTa. Other languages, if employing this pre-trained multilingual SentencePiece model “vocab_file” for subword segmentation, can reuse BartphoTokenizer with their own language-specialized “monolingual_vocab_file”.
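As a hypothetical sketch, reusing BartphoTokenizer for another language could look like this (both file names are placeholders you would provide yourself):
from transformers import BartphoTokenizer

tokenizer = BartphoTokenizer(
    vocab_file="sentencepiece.bpe.model",            # the multilingual XLM-RoBERTa SentencePiece model
    monolingual_vocab_file="monolingual_vocab.txt",  # your language-specialized vocabulary file
)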
This model was contributed by dqnguyen. The original code can be found here.
BartphoTokenizer
class transformers.BartphoTokenizer
< source >
( vocab_file monolingual_vocab_file bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None **kwargs )
Parameters
vocab_file (str) — Path to the vocabulary file. This vocabulary is the pre-trained SentencePiece model available from the multilingual XLM-RoBERTa, also used in mBART, consisting of 250K types.
monolingual_vocab_file (str) — Path to the monolingual vocabulary file. This monolingual vocabulary consists of Vietnamese-specialized types extracted from the multilingual vocabulary vocab_file of 250K types.
bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") — The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) — Additional special tokens used by the tokenizer.
sp_model_kwargs (dict, optional) — Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for SentencePiece can be used, among other things, to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.
sp_model (SentencePieceProcessor) — The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Adapted from XLMRobertaTokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A BARTPho sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
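For instance, a minimal sketch of that format in practice (the "vinai/bartpho-syllable" checkpoint name is an assumption used only for illustration):
>>> # A minimal sketch of the special-token format above; "vinai/bartpho-syllable" is an assumed example checkpoint
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable")
>>> ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Chúng tôi là những nghiên cứu viên."))
>>> tokenizer.build_inputs_with_special_tokens(ids)  # [bos/cls id] + ids + [eos/sep id], i.e. <s> X </s>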
convert_tokens_to_string
Converts a sequence of tokens (strings for sub-words) into a single string.
create_token_type_ids_from_sequences
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. BARTPho does not make use of token type ids, therefore a list of zeros is returned.
get_special_tokens_mask
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model.
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method. |
https://huggingface.co/docs/transformers/tf_xla | XLA Integration for TensorFlow Models
Accelerated Linear Algebra, dubbed XLA, is a compiler for accelerating the runtime of TensorFlow Models. From the official documentation:
XLA (Accelerated Linear Algebra) is a domain-specific compiler for linear algebra that can accelerate TensorFlow models with potentially no source code changes.
Using XLA in TensorFlow is simple – it comes packaged inside the tensorflow library, and it can be triggered with the jit_compile argument in any graph-creating function such as tf.function. When using Keras methods like fit() and predict(), you can enable XLA simply by passing the jit_compile argument to model.compile(). However, XLA is not limited to these methods - it can also be used to accelerate any arbitrary tf.function.
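For instance, here is a minimal sketch of the Keras route (the toy model below is only an illustration, not taken from 🤗 Transformers):
import tensorflow as tf

# A toy Keras model -- any Keras model can opt into XLA the same way
toy_model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# jit_compile=True makes fit()/predict() run through XLA
toy_model.compile(optimizer="adam", loss="mse", jit_compile=True)

x = tf.random.normal((32, 4))
y = tf.random.normal((32, 1))
toy_model.fit(x, y, epochs=1, verbose=0)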
Several TensorFlow methods in 🤗 Transformers have been rewritten to be XLA-compatible, including text generation for models such as GPT2, T5 and OPT, as well as speech processing for models such as Whisper.
While the exact amount of speed-up is very much model-dependent, for TensorFlow text generation models inside 🤗 Transformers, we noticed a speed-up of ~100x. This document will explain how you can use XLA for these models to get the maximum amount of performance. We’ll also provide links to additional resources if you’re interested in learning more about the benchmarks and our design philosophy behind the XLA integration.
Running TF functions with XLA
Let us consider the following model in TensorFlow:
import tensorflow as tf
model = tf.keras.Sequential(
[tf.keras.layers.Dense(10, input_shape=(10,), activation="relu"), tf.keras.layers.Dense(5, activation="softmax")]
)
The above model accepts inputs having a dimension of (10, ). We can use the model for running a forward pass like so:
batch_size = 16
input_vector_dim = 10
random_inputs = tf.random.normal((batch_size, input_vector_dim))
_ = model(random_inputs)
In order to run the forward pass with an XLA-compiled function, we’d need to do:
xla_fn = tf.function(model, jit_compile=True)
_ = xla_fn(random_inputs)
The default call() function of the model is used for compiling the XLA graph. But if you want to compile any other model function with XLA, that is also possible:
my_xla_fn = tf.function(model.my_xla_fn, jit_compile=True)
Running a TF text generation model with XLA from 🤗 Transformers
To enable XLA-accelerated generation within 🤗 Transformers, you need to have a recent version of transformers installed. You can install it by running:
pip install transformers --upgrade
And then you can run the following code:
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM
from transformers.utils import check_min_version
check_min_version("4.21.0")
tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
input_string = ["TensorFlow is"]
xla_generate = tf.function(model.generate, jit_compile=True)
tokenized_input = tokenizer(input_string, return_tensors="tf")
generated_tokens = xla_generate(**tokenized_input, num_beams=2)
decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
print(f"Generated -- {decoded_text}")
As you can see, enabling XLA on generate() is just a single line of code. The rest of the code remains unchanged. However, there are a couple of gotchas in the above code snippet that are specific to XLA. You need to be aware of them to realize the speed-ups that XLA can bring. We discuss these in the following section.
Gotchas to be aware of
When you are executing an XLA-enabled function (like xla_generate() above) for the first time, it will internally try to infer the computation graph, which is time-consuming. This process is known as “tracing”.
As a result, the first generation call will not be fast. Successive calls of xla_generate() (or any other XLA-enabled function) won’t have to re-infer the computation graph, as long as the inputs to the function have the same shape the computation graph was initially built with. While this is not a problem for modalities with fixed input shapes (e.g., images), you must pay attention if you are working with modalities that have variable input shapes (e.g., text).
To ensure xla_generate() always operates with the same input shapes, you can specify the padding arguments when calling the tokenizer.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
input_string = ["TensorFlow is"]
xla_generate = tf.function(model.generate, jit_compile=True)
tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf")
generated_tokens = xla_generate(**tokenized_input, num_beams=2)
decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
print(f"Generated -- {decoded_text}")
This way, you can ensure that xla_generate() always receives inputs with the same shape it was traced with, leading to speed-ups in generation time. You can verify this with the code below:
import time
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
xla_generate = tf.function(model.generate, jit_compile=True)
for input_string in ["TensorFlow is", "TensorFlow is a", "TFLite is a"]:
tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf")
start = time.time_ns()
generated_tokens = xla_generate(**tokenized_input, num_beams=2)
end = time.time_ns()
print(f"Execution time -- {(end - start) / 1e6:.1f} ms\n")
On a Tesla T4 GPU, you can expect outputs like the following:
Execution time -- 30819.6 ms
Execution time -- 79.0 ms
Execution time -- 78.9 ms
The first call to xla_generate() is time-consuming because of tracing, but the successive calls are orders of magnitude faster. Keep in mind that any change in the generation options at any point will trigger re-tracing, leading to slow-downs in generation time.
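For instance, continuing the benchmark snippet above, changing an option such as num_beams forces a new trace (a sketch of the behavior, not an exhaustive list of what triggers it):
generated_tokens = xla_generate(**tokenized_input, num_beams=2)  # fast: this shape/option combination is already traced
generated_tokens = xla_generate(**tokenized_input, num_beams=3)  # slow again: the new option value triggers re-tracing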
We didn’t cover all the text generation options 🤗 Transformers provides in this document. We encourage you to read the documentation for advanced use cases.
Additional Resources
Here, we leave you with some additional resources if you want to delve deeper into XLA in 🤗 Transformers and in general.
This Colab Notebook provides an interactive demonstration if you want to fiddle with the XLA-compatible encoder-decoder (like T5) and decoder-only (like GPT2) text generation models.
This blog post provides an overview of the comparison benchmarks for XLA-compatible models along with a friendly introduction to XLA in TensorFlow.
This blog post discusses our design philosophy behind adding XLA support to the TensorFlow models in 🤗 Transformers.
Recommended posts for learning more about XLA and TensorFlow graphs in general:
XLA: Optimizing Compiler for Machine Learning
Introduction to graphs and tf.function
Better performance with tf.function |
https://huggingface.co/docs/transformers/add_tensorflow_model | How to convert a 🤗 Transformers model to TensorFlow?
Having multiple frameworks available to use with 🤗 Transformers gives you the flexibility to play to their strengths when designing your application, but it implies that compatibility must be added on a per-model basis. The good news is that adding TensorFlow compatibility to an existing model is simpler than adding a new model from scratch! Whether you wish to have a deeper understanding of large TensorFlow models, make a major open-source contribution, or enable TensorFlow for your model of choice, this guide is for you.
This guide empowers you, a member of our community, to contribute TensorFlow model weights and/or architectures to be used in 🤗 Transformers, with minimal supervision from the Hugging Face team. Writing a new model is no small feat, but hopefully this guide will make it less of a rollercoaster 🎢 and more of a walk in the park 🚶. Harnessing our collective experiences is absolutely critical to make this process increasingly easier, and thus we highly encourage that you suggest improvements to this guide!
Before you dive deeper, it is recommended that you check the following resources if you’re new to 🤗 Transformers:
General overview of 🤗 Transformers
Hugging Face’s TensorFlow Philosophy
In the remainder of this guide, you will learn what’s needed to add a new TensorFlow model architecture, the procedure to convert PyTorch into TensorFlow model weights, and how to efficiently debug mismatches across ML frameworks. Let’s get started!
Are you unsure whether the model you wish to use already has a corresponding TensorFlow architecture?
Check the model_type field of the config.json of your model of choice (example). If the corresponding model folder in 🤗 Transformers has a file whose name starts with “modeling_tf”, it means that it has a corresponding TensorFlow architecture (example).
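If you prefer a programmatic check, here is a minimal sketch (the checkpoint name below is only an illustrative assumption):
from transformers import AutoConfig

# Reads config.json from the Hub and exposes its model_type field
config = AutoConfig.from_pretrained("bert-base-uncased")
print(config.model_type)  # "bert" -> look for a modeling_tf_*.py file in that model's folder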
Step-by-step guide to add TensorFlow model architecture code
There are many ways to design a large model architecture, and multiple ways of implementing said design. However, you might recall from our general overview of 🤗 Transformers that we are an opinionated bunch - the ease of use of 🤗 Transformers relies on consistent design choices. From experience, we can tell you a few important things about adding TensorFlow models:
Don’t reinvent the wheel! More often than not, there are at least two reference implementations you should check: the PyTorch equivalent of the model you are implementing and other TensorFlow models for the same class of problems.
Great model implementations survive the test of time. This doesn’t happen because the code is pretty, but rather because the code is clear, easy to debug and build upon. If you make the life of the maintainers easy with your TensorFlow implementation, by replicating the same patterns as in other TensorFlow models and minimizing the mismatch to the PyTorch implementation, you ensure your contribution will be long lived.
Ask for help when you’re stuck! The 🤗 Transformers team is here to help, and we’ve probably found solutions to the same problems you’re facing.
Here’s an overview of the steps needed to add a TensorFlow model architecture:
Select the model you wish to convert
Prepare transformers dev environment
(Optional) Understand theoretical aspects and the existing implementation
Implement the model architecture
Implement model tests
Submit the pull request
(Optional) Build demos and share with the world
1.-3. Prepare your model contribution
1. Select the model you wish to convert
Let’s start off with the basics: the first thing you need to know is the architecture you want to convert. If you don’t have your eyes set on a specific architecture, asking the 🤗 Transformers team for suggestions is a great way to maximize your impact - we will guide you towards the most prominent architectures that are missing on the TensorFlow side. If the specific model you want to use with TensorFlow already has a TensorFlow architecture implementation in 🤗 Transformers but is lacking weights, feel free to jump straight into the weight conversion section of this page.
For simplicity, the remainder of this guide assumes you’ve decided to contribute with the TensorFlow version of BrandNewBert (the same example as in the guide to add a new model from scratch).
Before starting the work on a TensorFlow model architecture, double-check that there is no ongoing effort to do so. You can search for BrandNewBert on the pull request GitHub page to confirm that there is no TensorFlow-related pull request.
2. Prepare transformers dev environment
Having selected the model architecture, open a draft PR to signal your intention to work on it. Follow the instructions below to set up your environment and open a draft PR.
Fork the repository by clicking on the ‘Fork’ button on the repository’s page. This creates a copy of the code under your GitHub user account.
Clone your transformers fork to your local disk, and add the base repository as a remote:
git clone https://github.com/[your Github handle]/transformers.git
cd transformers
git remote add upstream https://github.com/huggingface/transformers.git
Set up a development environment, for instance by running the following command:
python -m venv .env
source .env/bin/activate
pip install -e ".[dev]"
Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that’s the case make sure to install TensorFlow then do:
pip install -e ".[quality]"
Note: You don’t need to have CUDA installed. Making the new model work on CPU is sufficient.
Create a branch with a descriptive name from your main branch
git checkout -b add_tf_brand_new_bert
Fetch and rebase to current main
git fetch upstream
git rebase upstream/main
Add an empty .py file in src/transformers/models/brandnewbert/ named modeling_tf_brandnewbert.py. This will be your TensorFlow model file.
Push the changes to your account using:
git add .
git commit -m "initial commit"
git push -u origin add_tf_brand_new_bert
Once you are satisfied, go to the webpage of your fork on GitHub. Click on “Pull request”. Make sure to add the GitHub handle of some members of the Hugging Face team as reviewers, so that the Hugging Face team gets notified for future changes.
Change the PR into a draft by clicking on “Convert to draft” on the right of the GitHub pull request web page.
Now you have set up a development environment to port BrandNewBert to TensorFlow in 🤗 Transformers.
3. (Optional) Understand theoretical aspects and the existing implementation
You should take some time to read BrandNewBert’s paper, if such descriptive work exists. There might be large sections of the paper that are difficult to understand. If this is the case, this is fine - don’t worry! The goal is not to get a deep theoretical understanding of the paper, but to extract the necessary information required to effectively re-implement the model in 🤗 Transformers using TensorFlow. That being said, you don’t have to spend too much time on the theoretical aspects, but rather focus on the practical ones, namely the existing model documentation page (e.g. model docs for BERT).
After you’ve grasped the basics of the models you are about to implement, it’s important to understand the existing implementation. This is a great chance to confirm that a working implementation matches your expectations for the model, as well as to foresee technical challenges on the TensorFlow side.
It’s perfectly natural that you feel overwhelmed with the amount of information that you’ve just absorbed. It is definitely not a requirement that you understand all facets of the model at this stage. Nevertheless, we highly encourage you to clear any pressing questions in our forum.
4. Model implementation
Now it’s time to finally start coding. Our suggested starting point is the PyTorch file itself: copy the contents of modeling_brand_new_bert.py inside src/transformers/models/brand_new_bert/ into modeling_tf_brand_new_bert.py. The goal of this section is to modify the file and update the import structure of 🤗 Transformers such that you can import TFBrandNewBert and TFBrandNewBert.from_pretrained(model_repo, from_pt=True) successfully loads a working TensorFlow BrandNewBert model.
Sadly, there is no prescription to convert a PyTorch model into TensorFlow. You can, however, follow our selection of tips to make the process as smooth as possible:
Prepend TF to the name of all classes (e.g. BrandNewBert becomes TFBrandNewBert).
Most PyTorch operations have a direct TensorFlow replacement. For example, torch.nn.Linear corresponds to tf.keras.layers.Dense, torch.nn.Dropout corresponds to tf.keras.layers.Dropout, etc. If you’re not sure about a specific operation, you can use the TensorFlow documentation or the PyTorch documentation (see the sketch after these tips for a concrete mapping).
Look for patterns in the 🤗 Transformers codebase. If you come across a certain operation that doesn’t have a direct replacement, the odds are that someone else already had the same problem.
By default, keep the same variable names and structure as in PyTorch. This will make it easier to debug, track issues, and add fixes down the line.
Some layers have different default values in each framework. A notable example is the batch normalization layer’s epsilon (1e-5 in PyTorch and 1e-3 in TensorFlow). Double-check the documentation!
PyTorch’s nn.Parameter variables typically need to be initialized within TF Layer’s build(). See the following example: PyTorch / TensorFlow
If the PyTorch model has a #copied from ... on top of a function, the odds are that your TensorFlow model can also borrow that function from the architecture it was copied from, assuming it has a TensorFlow architecture.
Assigning the name attribute correctly in TensorFlow functions is critical to do the from_pt=True weight cross-loading. name is almost always the name of the corresponding variable in the PyTorch code. If name is not properly set, you will see it in the error message when loading the model weights.
The logic of the base model class, BrandNewBertModel, will actually reside in TFBrandNewBertMainLayer, a Keras layer subclass (example). TFBrandNewBertModel will simply be a wrapper around this layer.
Keras models need to be built in order to load pretrained weights. For that reason, TFBrandNewBertPreTrainedModel will need to hold an example of inputs to the model, the dummy_inputs (example).
If you get stuck, ask for help - we’re here to help you! 🤗
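To make these tips more concrete, here is a minimal, hypothetical sketch (the class, layer names and config fields are illustrative assumptions, not an actual model in the library) of how a PyTorch sub-module typically maps to its TensorFlow counterpart, including the name attribute needed for from_pt=True weight cross-loading and the training argument for Dropout:
import tensorflow as tf

class TFToyFeedForward(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        # PyTorch: self.dense = nn.Linear(config.hidden_size, config.intermediate_size)
        # name must match the PyTorch attribute so the weight cross-loading can find it
        self.dense = tf.keras.layers.Dense(config.intermediate_size, name="dense")
        # PyTorch: self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)

    def call(self, hidden_states, training=False):
        hidden_states = self.dense(hidden_states)
        # Propagating training ensures Dropout is only active at train time
        return self.dropout(hidden_states, training=training)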
In addition to the model file itself, you will also need to add the pointers to the model classes and related documentation pages. You can complete this part entirely following the patterns in other PRs (example). Here’s a list of the needed manual changes:
Include all public classes of BrandNewBert in src/transformers/__init__.py
Add BrandNewBert classes to the corresponding Auto classes in src/transformers/models/auto/modeling_tf_auto.py
Add the lazy loading classes related to BrandNewBert in src/transformers/utils/dummy_tf_objects.py
Update the import structures for the public classes in src/transformers/models/brand_new_bert/__init__.py
Add the documentation pointers to the public methods of BrandNewBert in docs/source/en/model_doc/brand_new_bert.md
Add yourself to the list of contributors to BrandNewBert in docs/source/en/model_doc/brand_new_bert.md
Finally, add a green tick ✅ to the TensorFlow column of BrandNewBert in docs/source/en/index.md
When you’re happy with your implementation, run the following checklist to confirm that your model architecture is ready:
All layers that behave differently at train time (e.g. Dropout) are called with a training argument, which is propagated all the way from the top-level classes
You have used #copied from ... whenever possible
TFBrandNewBertMainLayer and all classes that use it have their call function decorated with @unpack_inputs
TFBrandNewBertMainLayer is decorated with @keras_serializable
A TensorFlow model can be loaded from PyTorch weights using TFBrandNewBert.from_pretrained(model_repo, from_pt=True)
You can call the TensorFlow model using the expected input format
5. Add model tests
Hurray, you’ve implemented a TensorFlow model! Now it’s time to add tests to make sure that your model behaves as expected. As in the previous section, we suggest you start by copying the test_modeling_brand_new_bert.py file in tests/models/brand_new_bert/ into test_modeling_tf_brand_new_bert.py, and continue by making the necessary TensorFlow replacements. For now, in all .from_pretrained() calls, you should use the from_pt=True flag to load the existing PyTorch weights.
After you’re done, it’s time for the moment of truth: run the tests! 😬
NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \
py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py
The most likely outcome is that you’ll see a bunch of errors. Don’t worry, this is expected! Debugging ML models is notoriously hard, and the key ingredient to success is patience (and breakpoint()). In our experience, the hardest problems arise from subtle mismatches between ML frameworks, for which we have a few pointers at the end of this guide. In other cases, a general test might not be directly applicable to your model, in which case we suggest an override at the model test class level. Regardless of the issue, don’t hesitate to ask for help in your draft pull request if you’re stuck.
When all tests pass, congratulations, your model is nearly ready to be added to the 🤗 Transformers library! 🎉
6.-7. Ensure everyone can use your model
6. Submit the pull request
Once you’re done with the implementation and the tests, it’s time to submit a pull request. Before pushing your code, run our code formatting utility, make fixup 🪄. This will automatically fix any formatting issues, which would cause our automatic checks to fail.
It’s now time to convert your draft pull request into a real pull request. To do so, click on the “Ready for review” button and add Joao (@gante) and Matt (@Rocketknight1) as reviewers. A model pull request will need at least 3 reviewers, but they will take care of finding appropriate additional reviewers for your model.
After all reviewers are happy with the state of your PR, the final action point is to remove the from_pt=True flag in .from_pretrained() calls. Since there are no TensorFlow weights, you will have to add them! Check the section below for instructions on how to do it.
Finally, when the TensorFlow weights get merged, you have at least 3 reviewer approvals, and all CI checks are green, double-check the tests locally one last time
NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \
py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py
and we will merge your PR! Congratulations on the milestone 🎉
7. (Optional) Build demos and share with the world
One of the hardest parts about open-source is discovery. How can the other users learn about the existence of your fabulous TensorFlow contribution? With proper communication, of course! 📣
There are two main ways to share your model with the community:
Build demos. These include Gradio demos, notebooks, and other fun ways to show off your model. We highly encourage you to add a notebook to our community-driven demos.
Share stories on social media like Twitter and LinkedIn. You should be proud of your work and share your achievement with the community - your model can now be used by thousands of engineers and researchers around the world 🌍! We will be happy to retweet your posts and help you share your work with the community.
Adding TensorFlow weights to 🤗 Hub
Assuming that the TensorFlow model architecture is available in 🤗 Transformers, converting PyTorch weights into TensorFlow weights is a breeze!
Here’s how to do it:
Make sure you are logged into your Hugging Face account in your terminal. You can log in using the command huggingface-cli login (you can find your access tokens here)
Run transformers-cli pt-to-tf --model-name foo/bar, where foo/bar is the name of the model repository containing the PyTorch weights you want to convert
Tag @joaogante and @Rocketknight1 in the 🤗 Hub PR the command above has just created
That’s it! 🎉
Debugging mismatches across ML frameworks 🐛
At some point, when adding a new architecture or when creating TensorFlow weights for an existing architecture, you might come across errors complaining about mismatches between PyTorch and TensorFlow. You might even decide to open the model architecture code for the two frameworks, and find that they look identical. What’s going on? 🤔
First of all, let’s talk about why understanding these mismatches matters. Many community members will use 🤗 Transformers models out of the box, and trust that our models behave as expected. When there is a large mismatch between the two frameworks, it implies that the model is not following the reference implementation for at least one of the frameworks. This might lead to silent failures, in which the model runs but has poor performance. This is arguably worse than a model that fails to run at all! To that end, we aim at having a framework mismatch smaller than 1e-5 at all stages of the model.
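As an illustration, here is a minimal sketch of how such a mismatch can be measured (the bert-base-uncased checkpoint is only an example; it assumes both PyTorch and TensorFlow weights are available for it):
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
pt_model = AutoModel.from_pretrained("bert-base-uncased")
tf_model = TFAutoModel.from_pretrained("bert-base-uncased")

text = "Framework mismatches are measured on the outputs"
with torch.no_grad():
    pt_output = pt_model(**tokenizer(text, return_tensors="pt")).last_hidden_state.numpy()
tf_output = tf_model(**tokenizer(text, return_tensors="tf")).last_hidden_state.numpy()

# The target mentioned above: the largest absolute difference should stay below 1e-5
print(np.max(np.abs(pt_output - tf_output)))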
As in other numerical problems, the devil is in the details. And as in any detail-oriented craft, the secret ingredient here is patience. Here is our suggested workflow for when you come across this type of issue:
Locate the source of mismatches. The model you’re converting probably has near identical inner variables up to a certain point. Place breakpoint() statements in the two frameworks’ architectures, and compare the values of the numerical variables in a top-down fashion until you find the source of the problems.
Now that you’ve pinpointed the source of the issue, get in touch with the 🤗 Transformers team. It is possible that we’ve seen a similar problem before and can promptly provide a solution. As a fallback, scan popular pages like StackOverflow and GitHub issues.
If there is no solution in sight, it means you’ll have to go deeper. The good news is that you’ve located the issue, so you can focus on the problematic instruction, abstracting away the rest of the model! The bad news is that you’ll have to venture into the source implementation of said instruction. In some cases, you might find an issue with a reference implementation - don’t abstain from opening an issue in the upstream repository.
In some cases, in discussion with the 🤗 Transformers team, we might find that fixing the mismatch is infeasible. When the mismatch is very small in the output layers of the model (but potentially large in the hidden states), we might decide to ignore it in favor of distributing the model. The pt-to-tf CLI mentioned above has a --max-error flag to override the error message at weight conversion time. |
https://huggingface.co/docs/transformers/debugging | Debugging
Multi-GPU Network Issues Debug
When training or inferencing with DistributedDataParallel and multiple GPUs, if you run into issues of inter-communication between processes and/or nodes, you can use the following script to diagnose network issues.
wget https://raw.githubusercontent.com/huggingface/transformers/main/scripts/distributed/torch-distributed-gpu-test.py
For example, to test how 2 GPUs interact, run:
python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py
If both processes can talk to each other and allocate GPU memory, each will print an OK status.
For more GPUs or nodes, adjust the arguments in the script.
You will find a lot more details inside the diagnostics script, and even a recipe for how to run it in a SLURM environment.
An additional level of debug is to add the NCCL_DEBUG=INFO environment variable as follows:
NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py
This will dump a lot of NCCL-related debug information, which you can then search online if you find that some problems are reported. Or, if you’re not sure how to interpret the output, you can share the log file in an Issue.
Underflow and Overflow Detection
This feature is currently available for PyTorch only.
For multi-GPU training it requires DDP (torch.distributed.launch).
This feature can be used with any nn.Module-based model.
If you start getting loss=NaN or the model exhibits some other abnormal behavior due to inf or nan in activations or weights, one needs to discover where the first underflow or overflow happens and what led to it. Luckily you can accomplish that easily by activating a special module that will do the detection automatically.
If you’re using Trainer, you just need to add:
--debug underflow_overflow
to the normal command line arguments, or pass debug="underflow_overflow" when creating the TrainingArguments object.
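For instance, a minimal sketch of the TrainingArguments route (output_dir is a placeholder):
from transformers import TrainingArguments

# Enables the underflow/overflow detector for every forward call during training
training_args = TrainingArguments(output_dir="my-model", debug="underflow_overflow")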
If you’re using your own training loop or another Trainer you can accomplish the same with:
from transformers.debug_utils import DebugUnderflowOverflow
debug_overflow = DebugUnderflowOverflow(model)
DebugUnderflowOverflow inserts hooks into the model that immediately after each forward call will test input and output variables and also the corresponding module’s weights. As soon as inf or nan is detected in at least one element of the activations or weights, the program will assert and print a report like this (this was caught with google/mt5-small under fp16 mixed precision):
Detected inf/nan during batch_number=0
Last 21 forward frames:
abs min abs max metadata
encoder.block.1.layer.1.DenseReluDense.dropout Dropout
0.00e+00 2.57e+02 input[0]
0.00e+00 2.85e+02 output [...]
encoder.block.2.layer.0 T5LayerSelfAttention
6.78e-04 3.15e+03 input[0]
2.65e-04 3.42e+03 output[0]
None output[1]
2.25e-01 1.00e+04 output[2]
encoder.block.2.layer.1.layer_norm T5LayerNorm
8.69e-02 4.18e-01 weight
2.65e-04 3.42e+03 input[0]
1.79e-06 4.65e+00 output
encoder.block.2.layer.1.DenseReluDense.wi_0 Linear
2.17e-07 4.50e+00 weight
1.79e-06 4.65e+00 input[0]
2.68e-06 3.70e+01 output
encoder.block.2.layer.1.DenseReluDense.wi_1 Linear
8.08e-07 2.66e+01 weight
1.79e-06 4.65e+00 input[0]
1.27e-04 2.37e+02 output
encoder.block.2.layer.1.DenseReluDense.dropout Dropout
0.00e+00 8.76e+03 input[0]
0.00e+00 9.74e+03 output
encoder.block.2.layer.1.DenseReluDense.wo Linear
1.01e-06 6.44e+00 weight
0.00e+00 9.74e+03 input[0]
3.18e-04 6.27e+04 output
encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense
1.79e-06 4.65e+00 input[0]
3.18e-04 6.27e+04 output
encoder.block.2.layer.1.dropout Dropout
3.18e-04 6.27e+04 input[0]
0.00e+00 inf output
The example output has been trimmed in the middle for brevity.
The second column shows the value of the absolute largest element, so if you have a closer look at the last few frames, the inputs and outputs were in the range of 1e4. So when this training was done under fp16 mixed precision the very last step overflowed (since under fp16 the largest number before inf is 64e3). To avoid overflows under fp16 the activations must remain way below 1e4, because 1e4 * 1e4 = 1e8 so any matrix multiplication with large activations is going to lead to a numerical overflow condition.
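A quick numerical sketch of that fp16 ceiling:
import torch

print(torch.finfo(torch.float16).max)  # 65504.0, i.e. roughly 64e3
x = torch.tensor(6.27e04, dtype=torch.float16)
print(x * 2)  # tensor(inf, dtype=torch.float16) -- anything beyond ~64K overflows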
At the very start of the trace you can discover at which batch number the problem occurred (here Detected inf/nan during batch_number=0 means the problem occurred on the first batch).
Each reported frame starts by declaring the fully qualified entry for the corresponding module this frame is reporting for. If we look just at this frame:
encoder.block.2.layer.1.layer_norm T5LayerNorm
8.69e-02 4.18e-01 weight
2.65e-04 3.42e+03 input[0]
1.79e-06 4.65e+00 output
Here, encoder.block.2.layer.1.layer_norm indicates that it was a layer norm for the first layer, of the second block of the encoder. And the specific module called in the forward pass is T5LayerNorm.
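If you want to map such a fully qualified name back to the actual module object, here is a minimal sketch (assuming the same google/mt5-small checkpoint the report above was produced with):
from transformers import MT5ForConditionalGeneration

model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
layer_norm = model.get_submodule("encoder.block.2.layer.1.layer_norm")
print(type(layer_norm).__name__)  # T5LayerNorm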
Let’s look at the last few frames of that report:
Detected inf/nan during batch_number=0
Last 21 forward frames:
abs min abs max metadata [...]
encoder.block.2.layer.1.DenseReluDense.wi_0 Linear
2.17e-07 4.50e+00 weight
1.79e-06 4.65e+00 input[0]
2.68e-06 3.70e+01 output
encoder.block.2.layer.1.DenseReluDense.wi_1 Linear
8.08e-07 2.66e+01 weight
1.79e-06 4.65e+00 input[0]
1.27e-04 2.37e+02 output
encoder.block.2.layer.1.DenseReluDense.wo Linear
1.01e-06 6.44e+00 weight
0.00e+00 9.74e+03 input[0]
3.18e-04 6.27e+04 output
encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense
1.79e-06 4.65e+00 input[0]
3.18e-04 6.27e+04 output
encoder.block.2.layer.1.dropout Dropout
3.18e-04 6.27e+04 input[0]
0.00e+00 inf output
The last frame reports for the Dropout.forward function, with the first entry for the only input and the second for the only output. You can see that it was called from an attribute dropout inside the DenseReluDense class. We can see that it happened during the first layer, of the 2nd block, during the very first batch. Finally, the absolute largest input element was 6.27e+04 and the absolute largest output was inf.
You can see here that T5DenseGatedGeluDense.forward resulted in output activations whose absolute max value was around 62.7K, which is very close to fp16’s top limit of 64K. In the next frame we have Dropout, which renormalizes the remaining elements after zeroing some of them, pushing the absolute max value above 64K and causing an overflow (inf).
As you can see, it is the previous frames that we need to look into when the numbers start becoming too large for fp16.
Let’s match the report to the code from models/t5/modeling_t5.py:
class T5DenseGatedGeluDense(nn.Module):
def __init__(self, config):
super().__init__()
self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False)
self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False)
self.wo = nn.Linear(config.d_ff, config.d_model, bias=False)
self.dropout = nn.Dropout(config.dropout_rate)
self.gelu_act = ACT2FN["gelu_new"]
def forward(self, hidden_states):
hidden_gelu = self.gelu_act(self.wi_0(hidden_states))
hidden_linear = self.wi_1(hidden_states)
hidden_states = hidden_gelu * hidden_linear
hidden_states = self.dropout(hidden_states)
hidden_states = self.wo(hidden_states)
return hidden_states
Now it’s easy to see the dropout call, and all the previous calls as well.
Since the detection is happening in a forward hook, these reports are printed immediately after each forward returns.
Going back to the full report, to act on it and to fix the problem, we need to go a few frames up where the numbers started to go up and most likely switch to the fp32 mode here, so that the numbers don’t overflow when multiplied or summed up. Of course, there might be other solutions. For example, we could turn off amp temporarily if it’s enabled, after moving the original forward into a helper wrapper, like so:
def _forward(self, hidden_states):
hidden_gelu = self.gelu_act(self.wi_0(hidden_states))
hidden_linear = self.wi_1(hidden_states)
hidden_states = hidden_gelu * hidden_linear
hidden_states = self.dropout(hidden_states)
hidden_states = self.wo(hidden_states)
return hidden_states
import torch
def forward(self, hidden_states):
if torch.is_autocast_enabled():
with torch.cuda.amp.autocast(enabled=False):
return self._forward(hidden_states)
else:
return self._forward(hidden_states)
Since the automatic detector only reports on inputs and outputs of full frames, once you know where to look, you may want to analyse the intermediary stages of any specific forward function as well. In such a case you can use the detect_overflow helper function to inject the detector where you want it, for example:
from debug_utils import detect_overflow
class T5LayerFF(nn.Module):
[...]
def forward(self, hidden_states):
forwarded_states = self.layer_norm(hidden_states)
detect_overflow(forwarded_states, "after layer_norm")
forwarded_states = self.DenseReluDense(forwarded_states)
detect_overflow(forwarded_states, "after DenseReluDense")
return hidden_states + self.dropout(forwarded_states)
You can see that we added 2 of these and now we track if inf or nan for forwarded_states was detected somewhere in between.
Actually, the detector already reports these because each of the calls in the example above is an nn.Module, but let’s say you had some local direct calculations; this is how you’d do that.
Additionally, if you’re instantiating the debugger in your own code, you can adjust the number of frames printed from its default, e.g.:
from transformers.debug_utils import DebugUnderflowOverflow
debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save=100)
Specific batch absolute min and max value tracing
The same debugging class can be used for per-batch tracing with the underflow/overflow detection feature turned off.
Let’s say you want to watch the absolute min and max values for all the ingredients of each forward call of a given batch, and only do that for batches 1 and 3. Then you instantiate this class as:
debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3])
And now full batches 1 and 3 will be traced using the same format as the underflow/overflow detector does.
Batches are 0-indexed.
This is helpful if you know that the program starts misbehaving after a certain batch number, so you can fast-forward right to that area. Here is a sample truncated output for such a configuration:
*** Starting batch number=1 ***
abs min abs max metadata
shared Embedding
1.01e-06 7.92e+02 weight
0.00e+00 2.47e+04 input[0]
5.36e-05 7.92e+02 output
[...]
decoder.dropout Dropout
1.60e-07 2.27e+01 input[0]
0.00e+00 2.52e+01 output
decoder T5Stack
not a tensor output
lm_head Linear
1.01e-06 7.92e+02 weight
0.00e+00 1.11e+00 input[0]
6.06e-02 8.39e+01 output
T5ForConditionalGeneration
not a tensor output
*** Starting batch number=3 ***
abs min abs max metadata
shared Embedding
1.01e-06 7.92e+02 weight
0.00e+00 2.78e+04 input[0]
5.36e-05 7.92e+02 output
[...]
Here you will get a huge number of frames dumped - as many as there were forward calls in your model - so it may or may not be what you want, but sometimes it can be easier to use for debugging purposes than a normal debugger. For example, if a problem starts happening at batch number 150, you can dump traces for batches 149 and 150 and compare where the numbers started to diverge.
You can also specify the batch number after which to stop the training, with:
debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3], abort_after_batch_num=3) |
https://huggingface.co/docs/transformers/model_doc/bert-generation | BertGeneration
Overview
The BertGeneration model is a BERT model that can be leveraged for sequence-to-sequence tasks using EncoderDecoderModel as proposed in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
The abstract from the paper is the following:
Unsupervised pretraining of large neural models has recently revolutionized Natural Language Processing. By warm-starting from the publicly released checkpoints, NLP practitioners have pushed the state-of-the-art on multiple benchmarks while saving significant amounts of compute time. So far the focus has been mainly on the Natural Language Understanding tasks. In this paper, we demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We developed a Transformer-based sequence-to-sequence model that is compatible with publicly available pre-trained BERT, GPT-2 and RoBERTa checkpoints and conducted an extensive empirical study on the utility of initializing our model, both encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine Translation, Text Summarization, Sentence Splitting, and Sentence Fusion.
Usage:
The model can be used in combination with the EncoderDecoderModel to leverage two pretrained BERT checkpoints for subsequent fine-tuning.
>>> from transformers import BertGenerationDecoder, BertGenerationEncoder, BertTokenizer, EncoderDecoderModel
>>> # leverage checkpoints for a Bert2Bert model; BERT's [CLS] (id 101) is used as BOS and [SEP] (id 102) as EOS
>>> encoder = BertGenerationEncoder.from_pretrained("bert-large-uncased", bos_token_id=101, eos_token_id=102)
>>> # add cross-attention layers and configure the decoder with the same BOS/EOS ids
>>> decoder = BertGenerationDecoder.from_pretrained(
... "bert-large-uncased", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102
... )
>>> bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder)
>>> # create the tokenizer and prepare inputs and labels
>>> tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
>>> input_ids = tokenizer(
... "This is a long article to summarize", add_special_tokens=False, return_tensors="pt"
... ).input_ids
>>> labels = tokenizer("This is a short summary", return_tensors="pt").input_ids
>>> # train on one (article, summary) pair
>>> loss = bert2bert(input_ids=input_ids, decoder_input_ids=labels, labels=labels).loss
>>> loss.backward()
Pretrained EncoderDecoderModel checkpoints are also directly available in the model hub, e.g.:
>>> from transformers import AutoTokenizer, EncoderDecoderModel
>>> # load a pretrained sentence fusion model and its tokenizer
>>> sentence_fuser = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse")
>>> tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")
>>> input_ids = tokenizer(
... "This is the first sentence. This is the second sentence.", add_special_tokens=False, return_tensors="pt"
... ).input_ids
>>> outputs = sentence_fuser.generate(input_ids)
>>> print(tokenizer.decode(outputs[0]))
Tips:
BertGenerationEncoder and BertGenerationDecoder should be used in combination with EncoderDecoder.
For summarization, sentence splitting, sentence fusion and translation, no special tokens are required for the input. Therefore, no EOS token should be added to the end of the input.
This model was contributed by patrickvonplaten. The original code can be found here.
BertGenerationConfig
class transformers.BertGenerationConfig
< source >
( vocab_size = 50358 hidden_size = 1024 num_hidden_layers = 24 num_attention_heads = 16 intermediate_size = 4096 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 initializer_range = 0.02 layer_norm_eps = 1e-12 pad_token_id = 0 bos_token_id = 2 eos_token_id = 1 position_embedding_type = 'absolute' use_cache = True **kwargs )
Parameters
vocab_size (int, optional, defaults to 50358) — Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling BertGeneration.
hidden_size (int, optional, defaults to 1024) — Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 24) — Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 4096) — Dimensionality of the “intermediate” (often called feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.).
is_decoder (bool, optional, defaults to False) — Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True.
This is the configuration class to store the configuration of a BertGenerationPreTrainedModel. It is used to instantiate a BertGeneration model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BertGeneration google/bert_for_seq_generation_L-24_bbc_encoder architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Examples:
>>> from transformers import BertGenerationConfig, BertGenerationEncoder
>>> # Initializing a BertGeneration config
>>> configuration = BertGenerationConfig()
>>> # Initializing a model (with random weights) from the config
>>> model = BertGenerationEncoder(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
BertGenerationTokenizer
class transformers.BertGenerationTokenizer
< source >
( vocab_file bos_token = '<s>' eos_token = '</s>' unk_token = '<unk>' pad_token = '<pad>' sep_token = '<::::>' sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None **kwargs )
Parameters
vocab_file (str) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer.
eos_token (str, optional, defaults to "</s>") — The end of sequence token.
bos_token (str, optional, defaults to "<s>") — The beginning of sequence token.
unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths.
sp_model_kwargs (dict, optional) — Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for SentencePiece can be used, among other things, to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.
Construct a BertGeneration tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
save_vocabulary
< source >
( save_directory: str filename_prefix: typing.Optional[str] = None )
BertGenerationEncoder
class transformers.BertGenerationEncoder
< source >
( config )
Parameters
config (BertGenerationConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare BertGeneration model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention is all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
This model should be used when leveraging Bert or Roberta checkpoints for the EncoderDecoderModel class as described in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan, and Aliaksei Severyn.
To behave as a decoder, the model needs to be initialized with the is_decoder argument of the configuration set to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and add_cross_attention set to True; encoder_hidden_states is then expected as an input to the forward pass.
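For instance, a minimal sketch of configuring a randomly initialized decoder variant (no pretrained weights are loaded here):
>>> from transformers import BertGenerationConfig, BertGenerationEncoder
>>> # is_decoder enables the causal attention mask; add_cross_attention adds the cross-attention layers needed for Seq2Seq use
>>> config = BertGenerationConfig(is_decoder=True, add_cross_attention=True)
>>> model = BertGenerationEncoder(config)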
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertGenerationConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
The BertGenerationEncoder forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, BertGenerationEncoder
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("google/bert_for_seq_generation_L-24_bbc_encoder")
>>> model = BertGenerationEncoder.from_pretrained("google/bert_for_seq_generation_L-24_bbc_encoder")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
BertGenerationDecoder
class transformers.BertGenerationDecoder
< source >
( config )
Parameters
config (BertGenerationConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
BertGeneration Model with a language modeling head on top for CLM fine-tuning.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertGenerationConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
The BertGenerationDecoder forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, BertGenerationDecoder, BertGenerationConfig
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("google/bert_for_seq_generation_L-24_bbc_encoder")
>>> config = BertGenerationConfig.from_pretrained("google/bert_for_seq_generation_L-24_bbc_encoder")
>>> config.is_decoder = True
>>> model = BertGenerationDecoder.from_pretrained(
... "google/bert_for_seq_generation_L-24_bbc_encoder", config=config
... )
>>> inputs = tokenizer("Hello, my dog is cute", return_token_type_ids=False, return_tensors="pt")
>>> outputs = model(**inputs)
>>> prediction_logits = outputs.logits |
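In practice, the encoder and decoder above are usually combined into a single sequence-to-sequence model. The snippet below is a minimal, illustrative sketch of doing so with EncoderDecoderModel; reusing the same checkpoint for both halves and the keyword overrides passed to from_pretrained() are assumptions made for illustration, not a prescribed recipe.
>>> from transformers import AutoTokenizer, BertGenerationEncoder, BertGenerationDecoder, EncoderDecoderModel
>>> checkpoint = "google/bert_for_seq_generation_L-24_bbc_encoder"
>>> encoder = BertGenerationEncoder.from_pretrained(checkpoint)
>>> decoder = BertGenerationDecoder.from_pretrained(checkpoint, add_cross_attention=True, is_decoder=True)
>>> model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer("Hello, my dog is cute", return_token_type_ids=False, return_tensors="pt")
>>> # use the same ids as encoder input and decoder input for a simple forward pass
>>> outputs = model(input_ids=inputs.input_ids, decoder_input_ids=inputs.input_ids)
>>> logits = outputs.logits  # shape: (batch_size, decoder_sequence_length, vocab_size)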
https://huggingface.co/docs/transformers/perf_torch_compile | Optimize inference using torch.compile()
This guide aims to provide a benchmark on the inference speed-ups introduced with torch.compile() for computer vision models in 🤗 Transformers.
Benefits of torch.compile
Depending on the model and the GPU, `torch.compile()` yields up to 30% speed-up during inference. To use `torch.compile()`, simply install any version of `torch` above 2.0.
Compiling a model takes time, so it’s useful if you are compiling the model only once instead of every time you infer. To compile any computer vision model of your choice, call torch.compile() on the model as shown below:
from transformers import AutoModelForImageClassification
model = AutoModelForImageClassification.from_pretrained(MODEL_ID).to("cuda")
+ model = torch.compile(model)
compile() comes with multiple modes for compiling, which essentially differ in compilation time and inference overhead. max-autotune takes longer than reduce-overhead but results in faster inference. Default mode is fastest for compilation but is not as efficient compared to reduce-overhead for inference time. In this guide, we used the default mode. You can learn more about it here.
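For example, selecting a non-default mode is a one-line change. The snippet below is a sketch that reuses the ViT checkpoint from the benchmarking code further down:
import torch
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224").to("cuda")
# alternatives: mode="max-autotune", or omit the argument to use the default mode
model = torch.compile(model, mode="reduce-overhead")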
We benchmarked torch.compile with different computer vision models, tasks, types of hardware, and batch sizes on torch version 2.0.1.
Benchmarking code
Below you can find the benchmarking code for each task. We warm up the GPU before inference and take the mean time of 300 inferences, using the same image each time.
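The exact benchmarking script is not reproduced here, but a sketch of the warm-up and timing procedure described above could look like the following (the number of warm-up iterations is an assumption):
import numpy as np
import torch

def benchmark(model, processed_input, n_warmup=10, n_iters=300):
    # warm up the GPU (and trigger compilation for a compiled model)
    with torch.no_grad():
        for _ in range(n_warmup):
            _ = model(**processed_input)
    torch.cuda.synchronize()

    # time n_iters forward passes on the same input and report the mean latency in milliseconds
    timings = []
    with torch.no_grad():
        for _ in range(n_iters):
            start = torch.cuda.Event(enable_timing=True)
            end = torch.cuda.Event(enable_timing=True)
            start.record()
            _ = model(**processed_input)
            end.record()
            torch.cuda.synchronize()
            timings.append(start.elapsed_time(end))
    return np.mean(timings)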
Image Classification with ViT
import torch
from PIL import Image
import requests
import numpy as np
from transformers import AutoImageProcessor, AutoModelForImageClassification
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224").to("cuda")
model = torch.compile(model)
processed_input = processor(image, return_tensors='pt').to(device="cuda")
with torch.no_grad():
_ = model(**processed_input)
Object Detection with DETR
from transformers import AutoImageProcessor, AutoModelForObjectDetection
processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50").to("cuda")
model = torch.compile(model)
inputs = processor(images=image, return_tensors="pt").to("cuda")
with torch.no_grad():
_ = model(**inputs)
Image Segmentation with Segformer
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512").to("cuda")
model = torch.compile(model)
seg_inputs = processor(images=image, return_tensors="pt").to("cuda")
with torch.no_grad():
_ = model(**seg_inputs)
Below you can find the list of the models we benchmarked.
Image Classification
google/vit-base-patch16-224
microsoft/beit-base-patch16-224-pt22k-ft22k
facebook/convnext-large-224
microsoft/resnet-50
Image Segmentation
nvidia/segformer-b0-finetuned-ade-512-512
facebook/mask2former-swin-tiny-coco-panoptic
facebook/maskformer-swin-base-ade
google/deeplabv3_mobilenet_v2_1.0_513
Object Detection
google/owlvit-base-patch32
facebook/detr-resnet-101
microsoft/conditional-detr-resnet-50
Below you can find visualization of inference durations with and without torch.compile() and percentage improvements for each model in different hardware and batch sizes.
Below you can find inference durations in milliseconds for each model with and without compile(). Note that OwlViT results in OOM in larger batch sizes.
A100 (batch size: 1)
Task/Model torch 2.0 (no compile) torch 2.0 (compile)
Image Classification/ViT 9.325 7.584
Image Segmentation/Segformer 11.759 10.500
Object Detection/OwlViT 24.978 18.420
Image Classification/BeiT 11.282 8.448
Object Detection/DETR 34.619 19.040
Image Classification/ConvNeXT 10.410 10.208
Image Classification/ResNet 6.531 4.124
Image Segmentation/Mask2former 60.188 49.117
Image Segmentation/Maskformer 75.764 59.487
Image Segmentation/MobileNet 8.583 3.974
Object Detection/Resnet-101 36.276 18.197
Object Detection/Conditional-DETR 31.219 17.993
A100 (batch size: 4)
Task/Model torch 2.0 (no compile) torch 2.0 (compile)
Image Classification/ViT 14.832 14.499
Image Segmentation/Segformer 18.838 16.476
Image Classification/BeiT 13.205 13.048
Object Detection/DETR 48.657 32.418
Image Classification/ConvNeXT 22.940 21.631
Image Classification/ResNet 6.657 4.268
Image Segmentation/Mask2former 74.277 61.781
Image Segmentation/Maskformer 180.700 159.116
Image Segmentation/MobileNet 14.174 8.515
Object Detection/Resnet-101 68.101 44.998
Object Detection/Conditional-DETR 56.470 35.552
A100 (batch size: 16)
Task/Model torch 2.0 (no compile) torch 2.0 (compile)
Image Classification/ViT 40.944 40.010
Image Segmentation/Segformer 37.005 31.144
Image Classification/BeiT 41.854 41.048
Object Detection/DETR 164.382 161.902
Image Classification/ConvNeXT 82.258 75.561
Image Classification/ResNet 7.018 5.024
Image Segmentation/Mask2former 178.945 154.814
Image Segmentation/Maskformer 638.570 579.826
Image Segmentation/MobileNet 51.693 30.310
Object Detection/Resnet-101 232.887 155.021
Object Detection/Conditional-DETR 180.491 124.032
V100 (batch size: 1)
Task/Model torch 2.0 (no compile) torch 2.0 (compile)
Image Classification/ViT 10.495 6.00
Image Segmentation/Segformer 13.321 5.862
Object Detection/OwlViT 25.769 22.395
Image Classification/BeiT 11.347 7.234
Object Detection/DETR 33.951 19.388
Image Classification/ConvNeXT 11.623 10.412
Image Classification/ResNet 6.484 3.820
Image Segmentation/Mask2former 64.640 49.873
Image Segmentation/Maskformer 95.532 72.207
Image Segmentation/MobileNet 9.217 4.753
Object Detection/Resnet-101 52.818 28.367
Object Detection/Conditional-DETR 39.512 20.816
V100 (batch size: 4)
Task/Model torch 2.0 (no compile) torch 2.0 (compile)
Image Classification/ViT 15.181 14.501
Image Segmentation/Segformer 16.787 16.188
Image Classification/BeiT 15.171 14.753
Object Detection/DETR 88.529 64.195
Image Classification/ConvNeXT 29.574 27.085
Image Classification/ResNet 6.109 4.731
Image Segmentation/Mask2former 90.402 76.926
Image Segmentation/Maskformer 234.261 205.456
Image Segmentation/MobileNet 24.623 14.816
Object Detection/Resnet-101 134.672 101.304
Object Detection/Conditional-DETR 97.464 69.739
V100 (batch size: 16)
Task/Model torch 2.0 (no compile) torch 2.0 (compile)
Image Classification/ViT 52.209 51.633
Image Segmentation/Segformer 61.013 55.499
Image Classification/BeiT 53.938 53.581
Object Detection/DETR OOM OOM
Image Classification/ConvNeXT 109.682 100.771
Image Classification/ResNet 14.857 12.089
Image Segmentation/Mask2former 249.605 222.801
Image Segmentation/Maskformer 831.142 743.645
Image Segmentation/MobileNet 93.129 55.365
Object Detection/Resnet-101 482.425 361.843
Object Detection/Conditional-DETR 344.661 255.298
T4 (batch size: 1)
Task/Model torch 2.0 (no compile) torch 2.0 (compile)
Image Classification/ViT 16.520 15.786
Image Segmentation/Segformer 16.116 14.205
Object Detection/OwlViT 53.634 51.105
Image Classification/BeiT 16.464 15.710
Object Detection/DETR 73.100 53.99
Image Classification/ConvNeXT 32.932 30.845
Image Classification/ResNet 6.031 4.321
Image Segmentation/Mask2former 79.192 66.815
Image Segmentation/Maskformer 200.026 188.268
Image Segmentation/MobileNet 18.908 11.997
Object Detection/Resnet-101 106.622 82.566
Object Detection/Conditional-DETR 77.594 56.984
T4 (batch size: 4)
Task/Model torch 2.0 (no compile) torch 2.0 (compile)
Image Classification/ViT 43.653 43.626
Image Segmentation/Segformer 45.327 42.445
Image Classification/BeiT 52.007 51.354
Object Detection/DETR 277.850 268.003
Image Classification/ConvNeXT 119.259 105.580
Image Classification/ResNet 13.039 11.388
Image Segmentation/Mask2former 201.540 184.670
Image Segmentation/Maskformer 764.052 711.280
Image Segmentation/MobileNet 74.289 48.677
Object Detection/Resnet-101 421.859 357.614
Object Detection/Conditional-DETR 289.002 226.945
T4 (batch size: 16)
Task/Model torch 2.0 (no compile) torch 2.0 (compile)
Image Classification/ViT 163.914 160.907
Image Segmentation/Segformer 192.412 163.620
Image Classification/BeiT 188.978 187.976
Object Detection/DETR OOM OOM
Image Classification/ConvNeXT 422.886 388.078
Image Classification/ResNet 44.114 37.604
Image Segmentation/Mask2former 756.337 695.291
Image Segmentation/Maskformer 2842.940 2656.88
Image Segmentation/MobileNet 299.003 201.942
Object Detection/Resnet-101 1619.505 1262.758
Object Detection/Conditional-DETR 1137.513 897.390
PyTorch Nightly
We also benchmarked on PyTorch nightly (2.1.0dev, wheel available at https://download.pytorch.org/whl/nightly/cu118) and observed improvement in latency both for uncompiled and compiled models.
A100
Task/Model Batch Size torch 2.0 (no compile) torch 2.0 (compile)
Image Classification/BeiT Unbatched 12.462 6.954
Image Classification/BeiT 4 14.109 12.851
Image Classification/BeiT 16 42.179 42.147
Object Detection/DETR Unbatched 30.484 15.221
Object Detection/DETR 4 46.816 30.942
Object Detection/DETR 16 163.749 163.706
T4
Task/Model Batch Size torch 2.0 (no compile) torch 2.0 (compile)
Image Classification/BeiT Unbatched 14.408 14.052
Image Classification/BeiT 4 47.381 46.604
Image Classification/BeiT 16 42.179 42.147
Object Detection/DETR Unbatched 68.382 53.481
Object Detection/DETR 4 269.615 204.785
Object Detection/DETR 16 OOM OOM
V100
Task/Model Batch Size torch 2.0 (no compile) torch 2.0 (compile)
Image Classification/BeiT Unbatched 13.477 7.926
Image Classification/BeiT 4 15.103 14.378
Image Classification/BeiT 16 52.517 51.691
Object Detection/DETR Unbatched 28.706 19.077
Object Detection/DETR 4 88.402 62.949
Object Detection/DETR 16 OOM OOM
Reduce Overhead
We benchmarked the reduce-overhead compilation mode for A100 and T4 on PyTorch nightly.
A100
Task/Model Batch Size torch 2.0 (no compile) torch 2.0 (compile)
Image Classification/ConvNeXT Unbatched 11.758 7.335
Image Classification/ConvNeXT 4 23.171 21.490
Image Classification/ResNet Unbatched 7.435 3.801
Image Classification/ResNet 4 7.261 2.187
Object Detection/Conditional-DETR Unbatched 32.823 11.627
Object Detection/Conditional-DETR 4 50.622 33.831
Image Segmentation/MobileNet Unbatched 9.869 4.244
Image Segmentation/MobileNet 4 14.385 7.946
T4
Task/Model Batch Size torch 2.0 (no compile) torch 2.0 (compile)
Image Classification/ConvNeXT Unbatched 32.137 31.84
Image Classification/ConvNeXT 4 120.944 110.209
Image Classification/ResNet Unbatched 9.761 7.698
Image Classification/ResNet 4 15.215 13.871
Object Detection/Conditional-DETR Unbatched 72.150 57.660
Object Detection/Conditional-DETR 4 301.494 247.543
Image Segmentation/MobileNet Unbatched 22.266 19.339
Image Segmentation/MobileNet 4 78.311 50.983 |
https://huggingface.co/docs/transformers/add_new_model | How to add a model to 🤗 Transformers?
The 🤗 Transformers library is often able to offer new models thanks to community contributors. But this can be a challenging project and requires an in-depth knowledge of the 🤗 Transformers library and the model to implement. At Hugging Face, we’re trying to empower more of the community to actively add models and we’ve put together this guide to walk you through the process of adding a PyTorch model (make sure you have PyTorch installed).
If you’re interested in implementing a TensorFlow model, take a look at the How to convert a 🤗 Transformers model to TensorFlow guide!
Along the way, you’ll:
get insights into open-source best practices
understand the design principles behind one of the most popular deep learning libraries
learn how to efficiently test large models
learn how to integrate Python utilities like black, ruff, and make fix-copies to ensure clean and readable code
A Hugging Face team member will be available to help you along the way so you’ll never be alone. 🤗 ❤️
To get started, open a New model addition issue for the model you want to see in 🤗 Transformers. If you’re not especially picky about contributing a specific model, you can filter by the New model label to see if there are any unclaimed model requests and work on it.
Once you’ve opened a new model request, the first step is to get familiar with 🤗 Transformers if you aren’t already!
General overview of 🤗 Transformers
First, you should get a general overview of 🤗 Transformers. 🤗 Transformers is a very opinionated library, so there is a chance that you don’t agree with some of the library’s philosophies or design choices. From our experience, however, we found that the fundamental design choices and philosophies of the library are crucial to efficiently scale 🤗 Transformers while keeping maintenance costs at a reasonable level.
A good first starting point to better understand the library is to read the documentation of our philosophy. As a result of our way of working, there are some choices that we try to apply to all models:
Composition is generally favored over abstraction
Duplicating code is not always bad if it strongly improves the readability or accessibility of a model
Model files are as self-contained as possible so that when you read the code of a specific model, you ideally only have to look into the respective modeling_....py file.
In our opinion, the library's code is not just a means to provide a product, e.g. the ability to use BERT for inference, but also the very product that we want to improve. Hence, when adding a model, the user is not only the person who will use your model, but also everybody who will read, try to understand, and possibly tweak your code.
With this in mind, let’s go a bit deeper into the general library design.
Overview of models
To successfully add a model, it is important to understand the interaction between your model and its config, PreTrainedModel, and PretrainedConfig. For exemplary purposes, we will call the model to be added to 🤗 Transformers BrandNewBert.
Let’s take a look:
As you can see, we do make use of inheritance in 🤗 Transformers, but we keep the level of abstraction to an absolute minimum. There are never more than two levels of abstraction for any model in the library. BrandNewBertModel inherits from BrandNewBertPreTrainedModel which in turn inherits from PreTrainedModel and that’s it. As a general rule, we want to make sure that a new model only depends on PreTrainedModel. The important functionalities that are automatically provided to every new model are from_pretrained() and save_pretrained(), which are used for serialization and deserialization. All of the other important functionalities, such as BrandNewBertModel.forward should be completely defined in the new modeling_brand_new_bert.py script. Next, we want to make sure that a model with a specific head layer, such as BrandNewBertForMaskedLM does not inherit from BrandNewBertModel, but rather uses BrandNewBertModel as a component that can be called in its forward pass to keep the level of abstraction low. Every new model requires a configuration class, called BrandNewBertConfig. This configuration is always stored as an attribute in PreTrainedModel, and thus can be accessed via the config attribute for all classes inheriting from BrandNewBertPreTrainedModel:
model = BrandNewBertModel.from_pretrained("brandy/brand_new_bert")
model.config
Similar to the model, the configuration inherits basic serialization and deserialization functionalities from PretrainedConfig. Note that the configuration and the model are always serialized into two different formats - the model to a pytorch_model.bin file and the configuration to a config.json file. Calling the model's save_pretrained() will automatically call the configuration's save_pretrained(), so that both model and configuration are saved.
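For example, with the hypothetical BrandNewBert classes used throughout this section, saving the model to a folder writes both files, and from_pretrained() restores both (the folder name is arbitrary):
from transformers import BrandNewBertConfig, BrandNewBertModel  # hypothetical classes from this section

model = BrandNewBertModel(BrandNewBertConfig())
model.save_pretrained("./brand_new_bert")  # writes pytorch_model.bin and config.json
model = BrandNewBertModel.from_pretrained("./brand_new_bert")  # restores weights and configuration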
Code style
When coding your new model, keep in mind that Transformers is an opinionated library and we have a few quirks of our own regarding how code should be written :-)
The forward pass of your model should be fully written in the modeling file while being fully independent of other models in the library. If you want to reuse a block from another model, copy the code and paste it with a # Copied from comment on top (see here for a good example and there for more documentation on Copied from).
The code should be fully understandable, even by a non-native English speaker. This means you should pick descriptive variable names and avoid abbreviations. As an example, activation is preferred to act. One-letter variable names are strongly discouraged unless it’s an index in a for loop.
More generally, we prefer longer, explicit code to short, magical code.
Avoid subclassing nn.Sequential in PyTorch; instead, subclass nn.Module and write the forward pass yourself, so that anyone using your code can quickly debug it by adding print statements or breakpoints.
Your function signature should be type-annotated. For the rest, good variable names are way more readable and understandable than type annotations. A short sketch illustrating these conventions follows below.
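A minimal sketch of a layer written with these conventions in mind (the class and attribute names are purely hypothetical) could look like:
import torch
from torch import nn


class BrandNewBertIntermediate(nn.Module):
    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        # descriptive attribute names, no abbreviations
        self.dense = nn.Linear(hidden_size, intermediate_size)
        self.activation = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # explicit forward pass instead of nn.Sequential, easy to instrument with print statements
        hidden_states = self.dense(hidden_states)
        hidden_states = self.activation(hidden_states)
        return hidden_states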
Overview of tokenizers
Not quite ready yet :-( This section will be added soon!
Step-by-step recipe to add a model to 🤗 Transformers
Everyone has different preferences of how to port a model so it can be very helpful for you to take a look at summaries of how other contributors ported models to Hugging Face. Here is a list of community blog posts on how to port a model:
Porting GPT2 Model by Thomas
Porting WMT19 MT Model by Stas
From experience, we can tell you that the most important things to keep in mind when adding a model are:
Don’t reinvent the wheel! Most parts of the code you will add for the new 🤗 Transformers model already exist somewhere in 🤗 Transformers. Take some time to find similar, already existing models and tokenizers you can copy from. grep and rg are your friends. Note that it might very well happen that your model’s tokenizer is based on one model implementation, and your model’s modeling code on another one. E.g. FSMT’s modeling code is based on BART, while FSMT’s tokenizer code is based on XLM.
It’s more of an engineering challenge than a scientific challenge. You should spend more time creating an efficient debugging environment rather than trying to understand all theoretical aspects of the model in the paper.
Ask for help, when you’re stuck! Models are the core component of 🤗 Transformers so we at Hugging Face are more than happy to help you at every step to add your model. Don’t hesitate to ask if you notice you are not making progress.
In the following, we try to give you a general recipe that we found most useful when porting a model to 🤗 Transformers.
The following list is a summary of everything that has to be done to add a model and can be used by you as a To-Do List:
☐ (Optional) Understood the model’s theoretical aspects
☐ Prepared 🤗 Transformers dev environment
☐ Set up debugging environment of the original repository
☐ Created script that successfully runs the forward() pass using the original repository and checkpoint
☐ Successfully added the model skeleton to 🤗 Transformers
☐ Successfully converted original checkpoint to 🤗 Transformers checkpoint
☐ Successfully ran forward() pass in 🤗 Transformers that gives identical output to original checkpoint
☐ Finished model tests in 🤗 Transformers
☐ Successfully added tokenizer in 🤗 Transformers
☐ Run end-to-end integration tests
☐ Finished docs
☐ Uploaded model weights to the Hub
☐ Submitted the pull request
☐ (Optional) Added a demo notebook
To begin with, we usually recommend starting by getting a good theoretical understanding of BrandNewBert. However, if you prefer to understand the theoretical aspects of the model on-the-job, then it is totally fine to directly dive into BrandNewBert's code-base. This option might suit you better if your engineering skills are better than your theoretical skills, if you have trouble understanding BrandNewBert's paper, or if you just enjoy programming much more than reading scientific papers.
1. (Optional) Theoretical aspects of BrandNewBert
You should take some time to read BrandNewBert’s paper, if such descriptive work exists. There might be large sections of the paper that are difficult to understand. If this is the case, this is fine - don’t worry! The goal is not to get a deep theoretical understanding of the paper, but to extract the necessary information required to effectively re-implement the model in 🤗 Transformers. That being said, you don’t have to spend too much time on the theoretical aspects, but rather focus on the practical ones, namely:
What type of model is brand_new_bert? BERT-like encoder-only model? GPT2-like decoder-only model? BART-like encoder-decoder model? Look at the model_summary if you’re not familiar with the differences between those.
What are the applications of brand_new_bert? Text classification? Text generation? Seq2Seq tasks, e.g., summarization?
What is the novel feature of the model that makes it different from BERT/GPT-2/BART?
Which of the already existing 🤗 Transformers models is most similar to brand_new_bert?
What type of tokenizer is used? A sentencepiece tokenizer? Word piece tokenizer? Is it the same tokenizer as used for BERT or BART?
After you feel like you have gotten a good overview of the architecture of the model, you might want to write to the Hugging Face team with any questions you might have. This might include questions regarding the model’s architecture, its attention layer, etc. We will be more than happy to help you.
2. Next prepare your environment
Fork the repository by clicking on the ‘Fork’ button on the repository’s page. This creates a copy of the code under your GitHub user account.
Clone your transformers fork to your local disk, and add the base repository as a remote:
git clone https://github.com/[your Github handle]/transformers.git
cd transformers
git remote add upstream https://github.com/huggingface/transformers.git
Set up a development environment, for instance by running the following command:
python -m venv .env
source .env/bin/activate
pip install -e ".[dev]"
Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that’s the case make sure to install the Deep Learning framework you are working with (PyTorch, TensorFlow and/or Flax) then do:
pip install -e ".[quality]"
which should be enough for most use cases. You can then return to the parent directory:
cd ..
We recommend adding the PyTorch version of brand_new_bert to Transformers. To install PyTorch, please follow the instructions on https://pytorch.org/get-started/locally/.
Note: You don’t need to have CUDA installed. Making the new model work on CPU is sufficient.
To port brand_new_bert, you will also need access to its original repository:
git clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git
cd brand_new_bert
pip install -e .
Now you have set up a development environment to port brand_new_bert to 🤗 Transformers.
3.-4. Run a pretrained checkpoint using the original repository
At first, you will work on the original brand_new_bert repository. Often, the original implementation is very "researchy", meaning that documentation might be lacking and the code can be difficult to understand. But this should be exactly your motivation to reimplement brand_new_bert. At Hugging Face, one of our main goals is to make people stand on the shoulders of giants, which translates here very well into taking a working model and rewriting it to make it as accessible, user-friendly, and beautiful as possible. This is the number-one motivation to re-implement models into 🤗 Transformers - trying to make complex new NLP technology accessible to everybody.
You should therefore start by diving into the original repository.
Successfully running the official pretrained model in the original repository is often the most difficult step. From our experience, it is very important to spend some time getting familiar with the original code-base. You need to figure out the following:
Where to find the pretrained weights?
How to load the pretrained weights into the corresponding model?
How to run the tokenizer independently from the model?
Trace one forward pass so that you know which classes and functions are required for a simple forward pass. Usually, you only have to reimplement those functions.
Be able to locate the important components of the model: Where is the model’s class? Are there model sub-classes, e.g. EncoderModel, DecoderModel? Where is the self-attention layer? Are there multiple different attention layers, e.g. self-attention, cross-attention…?
How can you debug the model in the original environment of the repo? Do you have to add print statements, can you work with an interactive debugger like ipdb, or should you use an efficient IDE to debug the model, like PyCharm?
It is very important that before you start the porting process, you can efficiently debug code in the original repository! Also, remember that you are working with an open-source library, so do not hesitate to open an issue, or even a pull request in the original repository. The maintainers of this repository are most likely very happy about someone looking into their code!
At this point, it is really up to you which debugging environment and strategy you prefer to use to debug the original model. We strongly advise against setting up a costly GPU environment, but simply work on a CPU both when starting to dive into the original repository and also when starting to write the 🤗 Transformers implementation of the model. Only at the very end, when the model has already been successfully ported to 🤗 Transformers, one should verify that the model also works as expected on GPU.
In general, there are two possible debugging environments for running the original model
Jupyter notebooks / google colab
Local python scripts.
Jupyter notebooks have the advantage that they allow for cell-by-cell execution which can be helpful to better split logical components from one another and to have faster debugging cycles as intermediate results can be stored. Also, notebooks are often easier to share with other contributors, which might be very helpful if you want to ask the Hugging Face team for help. If you are familiar with Jupyter notebooks, we strongly recommend you work with them.
The obvious disadvantage of Jupyter notebooks is that if you are not used to working with them you will have to spend some time adjusting to the new programming environment and you might not be able to use your known debugging tools anymore, like ipdb.
For each code-base, a good first step is always to load a small pretrained checkpoint and to be able to reproduce a single forward pass using a dummy integer vector of input IDs as an input. Such a script could look like this (in pseudocode):
model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/")
input_ids = [0, 4, 5, 2, 3, 7, 9]
original_output = model.predict(input_ids)
Next, regarding the debugging strategy, there are generally a few to choose from:
Decompose the original model into many small testable components and run a forward pass on each of those for verification
Decompose the original model only into the original tokenizer and the original model, run a forward pass on those, and use intermediate print statements or breakpoints for verification
Again, it is up to you which strategy to choose. Often, one or the other is advantageous depending on the original code base.
If the original code-base allows you to decompose the model into smaller sub-components, e.g. if the original code-base can easily be run in eager mode, it is usually worth the effort to do so. There are some important advantages to taking the more difficult road in the beginning:
at a later stage when comparing the original model to the Hugging Face implementation, you can verify automatically for each component individually that the corresponding component of the 🤗 Transformers implementation matches instead of relying on visual comparison via print statements
it helps you break the big problem of porting a model into smaller problems of just porting individual components and thus structure your work better
separating the model into logical meaningful components will help you to get a better overview of the model’s design and thus to better understand the model
at a later stage those component-by-component tests help you to ensure that no regression occurs as you continue changing your code
Lysandre’s integration checks for ELECTRA gives a nice example of how this can be done.
However, if the original code-base is very complex or only allows intermediate components to be run in a compiled mode, it might be too time-consuming or even impossible to separate the model into smaller testable sub-components. A good example is T5’s MeshTensorFlow library which is very complex and does not offer a simple way to decompose the model into its sub-components. For such libraries, one often relies on verifying print statements.
No matter which strategy you choose, the recommended procedure is often the same: start by debugging the starting layers first and finish with the ending layers last.
It is recommended that you retrieve the output, either by print statements or sub-component functions, of the following layers in the following order:
Retrieve the input IDs passed to the model
Retrieve the word embeddings
Retrieve the input of the first Transformer layer
Retrieve the output of the first Transformer layer
Retrieve the output of the following n - 1 Transformer layers
Retrieve the output of the whole BrandNewBert Model
The input IDs should consist of an array of integers, e.g. input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]
The outputs of the following layers often consist of multi-dimensional float arrays, e.g. a tensor of shape (batch_size, sequence_length, hidden_size) filled with floating point values.
We expect that every model added to 🤗 Transformers passes a couple of integration tests, meaning that the original model and the reimplemented version in 🤗 Transformers have to give the exact same output up to a precision of 0.001! Since it is normal that the exact same model written in different libraries can give a slightly different output depending on the library framework, we accept an error tolerance of 1e-3 (0.001). It is not enough if the model gives nearly the same output, they have to be almost identical. Therefore, you will certainly compare the intermediate outputs of the 🤗 Transformers version multiple times against the intermediate outputs of the original implementation of brand_new_bert in which case an efficient debugging environment of the original repository is absolutely important. Here is some advice to make your debugging environment as efficient as possible.
Find the best way of debugging intermediate results. Is the original repository written in PyTorch? Then you should probably take the time to write a longer script that decomposes the original model into smaller sub-components to retrieve intermediate values. Is the original repository written in Tensorflow 1? Then you might have to rely on TensorFlow print operations like tf.print to output intermediate values. Is the original repository written in Jax? Then make sure that the model is not jitted when running the forward pass, e.g. check-out this link.
Use the smallest pretrained checkpoint you can find. The smaller the checkpoint, the faster your debug cycle becomes. It is not efficient if your pretrained model is so big that your forward pass takes more than 10 seconds. In case only very large checkpoints are available, it might make more sense to create a dummy model in the new environment with randomly initialized weights and save those weights for comparison with the 🤗 Transformers version of your model
Make sure you are using the easiest way of calling a forward pass in the original repository. Ideally, you want to find the function in the original repository that only calls a single forward pass, i.e. that is often called predict, evaluate, forward or __call__. You don’t want to debug a function that calls forward multiple times, e.g. to generate text, like autoregressive_sample, generate.
Try to separate the tokenization from the model’s forward pass. If the original repository shows examples where you have to input a string, then try to find out where in the forward call the string input is changed to input ids and start from this point. This might mean that you have to possibly write a small script yourself or change the original code so that you can directly input the ids instead of an input string.
Make sure that the model in your debugging setup is not in training mode, which often causes the model to yield random outputs due to multiple dropout layers in the model. Make sure that the forward pass in your debugging environment is deterministic so that the dropout layers are not used. Or use transformers.utils.set_seed if the old and new implementations are in the same framework.
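On the 🤗 Transformers side, a minimal sketch of such a deterministic setup (the seed value is arbitrary, and model and input_ids stand in for the objects from your own debugging script) could be:
import torch
from transformers import set_seed

set_seed(0)   # make any remaining randomness reproducible
model.eval()  # disable dropout and other training-mode behavior

with torch.no_grad():  # no gradients are needed when comparing forward passes
    outputs = model(input_ids)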
The following section gives you more specific details/tips on how you can do this for brand_new_bert.
5.-14. Port BrandNewBert to 🤗 Transformers
Next, you can finally start adding new code to 🤗 Transformers. Go into the clone of your 🤗 Transformers’ fork:
In the special case that you are adding a model whose architecture exactly matches the model architecture of an existing model you only have to add a conversion script as described in this section. In this case, you can just re-use the whole model architecture of the already existing model.
Otherwise, let’s start generating a new model. You have two choices here:
transformers-cli add-new-model-like to add a new model like an existing one
transformers-cli add-new-model to add a new model from our template (will look like BERT or Bart depending on the type of model you select)
In both cases, you will be prompted with a questionnaire to fill in the basic information of your model. The second command requires installing cookiecutter; you can find more information on it here.
Open a Pull Request on the main huggingface/transformers repo
Before starting to adapt the automatically generated code, now is the time to open a “Work in progress (WIP)” pull request, e.g. “[WIP] Add brand_new_bert”, in 🤗 Transformers so that you and the Hugging Face team can work side-by-side on integrating the model into 🤗 Transformers.
You should do the following:
Create a branch with a descriptive name from your main branch
git checkout -b add_brand_new_bert
Commit the automatically generated code:
git add .
git commit
Fetch and rebase to current main
git fetch upstream
git rebase upstream/main
Push the changes to your account using:
git push -u origin add_brand_new_bert
Once you are satisfied, go to the webpage of your fork on GitHub. Click on “Pull request”. Make sure to add the GitHub handle of some members of the Hugging Face team as reviewers, so that the Hugging Face team gets notified for future changes.
Change the PR into a draft by clicking on “Convert to draft” on the right of the GitHub pull request web page.
In the following, whenever you have made some progress, don’t forget to commit your work and push it to your account so that it shows in the pull request. Additionally, you should make sure to update your work with the current main from time to time by doing:
git fetch upstream
git merge upstream/main
In general, all questions you might have regarding the model or your implementation should be asked in your PR and discussed/solved in the PR. This way, the Hugging Face team will always be notified when you are committing new code or if you have a question. It is often very helpful to point the Hugging Face team to your added code so that the Hugging Face team can efficiently understand your problem or question.
To do so, you can go to the “Files changed” tab where you see all of your changes, go to a line regarding which you want to ask a question, and click on the “+” symbol to add a comment. Whenever a question or problem has been solved, you can click on the “Resolve” button of the created comment.
In the same way, the Hugging Face team will open comments when reviewing your code. We recommend asking most questions on GitHub on your PR. For some very general questions that are not very useful for the public, feel free to ping the Hugging Face team by Slack or email.
5. Adapt the generated models code for brand_new_bert
At first, we will focus only on the model itself and not care about the tokenizer. All the relevant code should be found in the generated files src/transformers/models/brand_new_bert/modeling_brand_new_bert.py and src/transformers/models/brand_new_bert/configuration_brand_new_bert.py.
Now you can finally start coding :). The generated code in src/transformers/models/brand_new_bert/modeling_brand_new_bert.py will either have the same architecture as BERT if it's an encoder-only model or BART if it's an encoder-decoder model. At this point, you should remind yourself what you've learned in the beginning about the theoretical aspects of the model: "How is the model different from BERT or BART?". Implement those changes, which often means changing the self-attention layer, the order of the normalization layer, etc. Again, it is often useful to look at the similar architecture of already existing models in Transformers to get a better feeling of how your model should be implemented.
Note that at this point, you don’t have to be very sure that your code is fully correct or clean. Rather, it is advised to add a first unclean, copy-pasted version of the original code to src/transformers/models/brand_new_bert/modeling_brand_new_bert.py until you feel like all the necessary code is added. From our experience, it is much more efficient to quickly add a first version of the required code and improve/correct the code iteratively with the conversion script as described in the next section. The only thing that has to work at this point is that you can instantiate the 🤗 Transformers implementation of brand_new_bert, i.e. the following command should work:
from transformers import BrandNewBertModel, BrandNewBertConfig
model = BrandNewBertModel(BrandNewBertConfig())
The above command will create a model according to the default parameters as defined in BrandNewBertConfig() with random weights, thus making sure that the __init__() methods of all components work.
Note that all random initialization should happen in the _init_weights method of your BrandNewBertPreTrainedModel class. It should initialize all leaf modules depending on the variables of the config. Here is an example with the BERT _init_weights method:
def _init_weights(self, module):
    """Initialize the weights"""
    if isinstance(module, nn.Linear):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.bias is not None:
            module.bias.data.zero_()
    elif isinstance(module, nn.Embedding):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.padding_idx is not None:
            module.weight.data[module.padding_idx].zero_()
    elif isinstance(module, nn.LayerNorm):
        module.bias.data.zero_()
        module.weight.data.fill_(1.0)
You can have some more custom schemes if you need a special initialization for some modules. For instance, in Wav2Vec2ForPreTraining, the last two linear layers need to have the initialization of the regular PyTorch nn.Linear but all the other ones should use an initialization as above. This is coded like this:
def _init_weights(self, module):
    """Initialize the weights"""
    if isinstance(module, Wav2Vec2ForPreTraining):
        module.project_hid.reset_parameters()
        module.project_q.reset_parameters()
        module.project_hid._is_hf_initialized = True
        module.project_q._is_hf_initialized = True
    elif isinstance(module, nn.Linear):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.bias is not None:
            module.bias.data.zero_()
The _is_hf_initialized flag is internally used to make sure we only initialize a submodule once. By setting it to True for module.project_q and module.project_hid, we make sure the custom initialization we did is not overridden later on; the _init_weights function won't be applied to them.
6. Write a conversion script
Next, you should write a conversion script that lets you convert the checkpoint you used to debug brand_new_bert in the original repository to a checkpoint compatible with your just created 🤗 Transformers implementation of brand_new_bert. It is not advised to write the conversion script from scratch, but rather to look through already existing conversion scripts in 🤗 Transformers for one that has been used to convert a similar model that was written in the same framework as brand_new_bert. Usually, it is enough to copy an already existing conversion script and slightly adapt it for your use case. Don’t hesitate to ask the Hugging Face team to point you to a similar already existing conversion script for your model.
If you are porting a model from TensorFlow to PyTorch, a good starting point might be BERT’s conversion script here
If you are porting a model from PyTorch to PyTorch, a good starting point might be BART’s conversion script here
In the following, we’ll quickly explain how PyTorch models store layer weights and define layer names. In PyTorch, the name of a layer is defined by the name of the class attribute you give the layer. Let’s define a dummy model in PyTorch, called SimpleModel as follows:
from torch import nn
class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(10, 10)
        self.intermediate = nn.Linear(10, 10)
        self.layer_norm = nn.LayerNorm(10)
Now we can create an instance of this model definition, which will fill all weights (dense, intermediate, layer_norm) with random values. We can print the model to see its architecture:
model = SimpleModel()
print(model)
This will print out the following:
SimpleModel(
(dense): Linear(in_features=10, out_features=10, bias=True)
(intermediate): Linear(in_features=10, out_features=10, bias=True)
(layer_norm): LayerNorm((10,), eps=1e-05, elementwise_affine=True)
)
We can see that the layer names are defined by the name of the class attribute in PyTorch. You can print out the weight values of a specific layer:
print(model.dense.weight.data)
to see that the weights were randomly initialized
tensor([[-0.0818, 0.2207, -0.0749, -0.0030, 0.0045, -0.1569, -0.1598, 0.0212,
-0.2077, 0.2157],
[ 0.1044, 0.0201, 0.0990, 0.2482, 0.3116, 0.2509, 0.2866, -0.2190,
0.2166, -0.0212],
[-0.2000, 0.1107, -0.1999, -0.3119, 0.1559, 0.0993, 0.1776, -0.1950,
-0.1023, -0.0447],
[-0.0888, -0.1092, 0.2281, 0.0336, 0.1817, -0.0115, 0.2096, 0.1415,
-0.1876, -0.2467],
[ 0.2208, -0.2352, -0.1426, -0.2636, -0.2889, -0.2061, -0.2849, -0.0465,
0.2577, 0.0402],
[ 0.1502, 0.2465, 0.2566, 0.0693, 0.2352, -0.0530, 0.1859, -0.0604,
0.2132, 0.1680],
[ 0.1733, -0.2407, -0.1721, 0.1484, 0.0358, -0.0633, -0.0721, -0.0090,
0.2707, -0.2509],
[-0.1173, 0.1561, 0.2945, 0.0595, -0.1996, 0.2988, -0.0802, 0.0407,
0.1829, -0.1568],
[-0.1164, -0.2228, -0.0403, 0.0428, 0.1339, 0.0047, 0.1967, 0.2923,
0.0333, -0.0536],
[-0.1492, -0.1616, 0.1057, 0.1950, -0.2807, -0.2710, -0.1586, 0.0739,
0.2220, 0.2358]]).
In the conversion script, you should fill those randomly initialized weights with the exact weights of the corresponding layer in the checkpoint. E.g.
layer_name = "dense"
pretrained_weight = array_of_dense_layer
model_pointer = getattr(model, "dense")
model_pointer.weight.data = torch.from_numpy(pretrained_weight)
While doing so, you must verify that each randomly initialized weight of your PyTorch model and its corresponding pretrained checkpoint weight exactly match in both shape and name. To do so, it is necessary to add assert statements for the shape and print out the names of the checkpoint weights. E.g. you should add statements like:
assert (
    model_pointer.weight.shape == pretrained_weight.shape
), f"Pointer shape of random weight {model_pointer.weight.shape} and array shape of checkpoint weight {pretrained_weight.shape} mismatched"
Besides, you should also print out the names of both weights to make sure they match, e.g.
logger.info(f"Initialize PyTorch weight {layer_name} from {pretrained_weight.name}")
If either the shape or the name doesn’t match, you probably assigned the wrong checkpoint weight to a randomly initialized layer of the 🤗 Transformers implementation.
An incorrect shape is most likely due to an incorrect setting of the config parameters in BrandNewBertConfig() that do not exactly match those that were used for the checkpoint you want to convert. However, it could also be that PyTorch’s implementation of a layer requires the weight to be transposed beforehand.
Finally, you should also check that all required weights are initialized and print out all checkpoint weights that were not used for initialization to make sure the model is correctly converted. It is completely normal that the conversion trials fail with either a wrong shape statement or a wrong name assignment. This is most likely because either you used incorrect parameters in BrandNewBertConfig(), have a wrong architecture in the 🤗 Transformers implementation, have a bug in the __init__() functions of one of the components of the 🤗 Transformers implementation, or need to transpose one of the checkpoint weights.
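One way to sketch such bookkeeping is a hypothetical helper that assumes the original checkpoint is a flat mapping from layer names to NumPy arrays, as in the simplified example above (nested module names would need a more elaborate lookup than getattr):
import torch

def load_flat_checkpoint(model, checkpoint_weights):
    # checkpoint_weights: hypothetical {layer_name: numpy array} mapping from the original repository
    unused_weights = []
    for layer_name, pretrained_weight in checkpoint_weights.items():
        model_pointer = getattr(model, layer_name, None)
        if model_pointer is None:
            unused_weights.append(layer_name)  # checkpoint weight with no matching layer
            continue
        assert (
            model_pointer.weight.shape == pretrained_weight.shape
        ), f"Shape mismatch for {layer_name}: {model_pointer.weight.shape} vs {pretrained_weight.shape}"
        model_pointer.weight.data = torch.from_numpy(pretrained_weight)
    print(f"Checkpoint weights not used for initialization: {unused_weights}")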
This step should be iterated with the previous step until all weights of the checkpoint are correctly loaded in the Transformers model. Having correctly loaded the checkpoint into the 🤗 Transformers implementation, you can then save the model under a folder of your choice /path/to/converted/checkpoint/folder that should then contain both a pytorch_model.bin file and a config.json file:
model.save_pretrained("/path/to/converted/checkpoint/folder")
7. Implement the forward pass
Having managed to correctly load the pretrained weights into the 🤗 Transformers implementation, you should now make sure that the forward pass is correctly implemented. In Get familiar with the original repository, you have already created a script that runs a forward pass of the model using the original repository. Now you should write an analogous script using the 🤗 Transformers implementation instead of the original one. It should look as follows:
model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder")
input_ids = torch.tensor([[0, 4, 4, 3, 2, 4, 1, 7, 19]])
output = model(input_ids).last_hidden_state
It is very likely that the 🤗 Transformers implementation and the original model implementation don’t give the exact same output the very first time or that the forward pass throws an error. Don’t be disappointed - it’s expected! First, you should make sure that the forward pass doesn’t throw any errors. It often happens that the wrong dimensions are used leading to a Dimensionality mismatch error or that the wrong data type object is used, e.g. torch.long instead of torch.float32. Don’t hesitate to ask the Hugging Face team for help, if you don’t manage to solve certain errors.
The final part to make sure the 🤗 Transformers implementation works correctly is to ensure that the outputs are equivalent to a precision of 1e-3. First, you should ensure that the output shapes are identical, i.e. outputs.shape should yield the same value for the script of the 🤗 Transformers implementation and the original implementation. Next, you should make sure that the output values are identical as well. This is one of the most difficult parts of adding a new model. Common reasons why the outputs are not identical are:
Some layers were not added, i.e. an activation layer was not added, or the residual connection was forgotten
The word embedding matrix was not tied
The wrong positional embeddings are used because the original implementation uses an offset
Dropout is applied during the forward pass. To fix this make sure model.training is False and that no dropout layer is falsely activated during the forward pass, i.e. pass self.training to PyTorch’s functional dropout
The best way to fix the problem is usually to look at the forward pass of the original implementation and the 🤗 Transformers implementation side-by-side and check if there are any differences. Ideally, you should debug/print out intermediate outputs of both implementations of the forward pass to find the exact position in the network where the 🤗 Transformers implementation shows a different output than the original implementation. First, make sure that the hard-coded input_ids in both scripts are identical. Next, verify that the outputs of the first transformation of the input_ids (usually the word embeddings) are identical. And then work your way up to the very last layer of the network. At some point, you will notice a difference between the two implementations, which should point you to the bug in the 🤗 Transformers implementation. From our experience, a simple and efficient way is to add many print statements in both the original implementation and the 🤗 Transformers implementation, at the same positions in the network respectively, and to successively remove print statements showing the same values for intermediate representations.
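If you prefer not to sprinkle print statements throughout both codebases, an alternative (not required by this guide) is to register PyTorch forward hooks that print a short summary of every module's output. Below is a minimal sketch, assuming model is the 🤗 Transformers model loaded above; the same idea can be applied to the original implementation if it is also a torch.nn.Module:
import torch

def make_print_hook(name):
    # Print the shape and the first few values of each module's output so that
    # both implementations can be compared side by side.
    def hook(module, inputs, output):
        tensor = output[0] if isinstance(output, tuple) else output
        if isinstance(tensor, torch.Tensor):
            print(f"{name}: shape={tuple(tensor.shape)}, first values={tensor.flatten()[:3].tolist()}")
    return hook

for name, module in model.named_modules():
    module.register_forward_hook(make_print_hook(name))

output = model(input_ids).last_hidden_state  # prints one line per module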
Once you're confident that both implementations yield the same output, verify it with torch.allclose(original_output, output, atol=1e-3) - then you're done with the most difficult part! Congratulations - the work left to be done should be a cakewalk 😊.
8. Adding all necessary model tests
At this point, you have successfully added a new model. However, it is very much possible that the model does not yet fully comply with the required design. To make sure the implementation is fully compatible with 🤗 Transformers, all common tests should pass. The Cookiecutter should have automatically added a test file for your model, usually under tests/models/brand_new_bert/test_modeling_brand_new_bert.py. Run this test file to verify that all common tests pass:
pytest tests/models/brand_new_bert/test_modeling_brand_new_bert.py
Having fixed all common tests, it is now crucial to ensure that all the nice work you have done is well tested, so that
a) The community can easily understand your work by looking at specific tests of brand_new_bert
b) Future changes to your model will not break any important feature of the model.
At first, integration tests should be added. Those integration tests essentially do the same as the debugging scripts you used earlier to port the model to 🤗 Transformers. A template of those model tests has already been added by the Cookiecutter, called BrandNewBertModelIntegrationTests, and only has to be filled out by you. To ensure that those tests are passing, run
RUN_SLOW=1 pytest -sv tests/models/brand_new_bert/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests
In case you are using Windows, you should replace RUN_SLOW=1 with SET RUN_SLOW=1
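Filled out, such an integration test could look roughly like the sketch below; the checkpoint name, the expected values, and the hidden size check are all placeholders that you replace with the real repository id and with values recorded from the original implementation:
import unittest

import torch

from transformers import BrandNewBertModel
from transformers.testing_utils import slow


class BrandNewBertModelIntegrationTests(unittest.TestCase):
    @slow
    def test_inference_no_head(self):
        model = BrandNewBertModel.from_pretrained("author/brand_new_bert_base")  # placeholder repo id
        input_ids = torch.tensor([[0, 4, 4, 3, 2, 4, 1, 7, 19]])
        with torch.no_grad():
            output = model(input_ids).last_hidden_state
        # Placeholder values - record the real ones by running the original implementation.
        expected_slice = torch.tensor([[[0.1, -0.2, 0.3], [0.4, -0.5, 0.6], [0.7, -0.8, 0.9]]])
        self.assertEqual(output.shape, (1, 9, model.config.hidden_size))
        self.assertTrue(torch.allclose(output[:, :3, :3], expected_slice, atol=1e-3))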
Second, all features that are special to brand_new_bert should be tested additionally in a separate test under BrandNewBertModelTester/BrandNewBertModelTest. This part is often forgotten but is extremely useful in two ways:
It helps to transfer the knowledge you have acquired during the model addition to the community by showing how the special features of brand_new_bert should work.
Future contributors can quickly test changes to the model by running those special tests.
9. Implement the tokenizer
Next, we should add the tokenizer of brand_new_bert. Usually, the tokenizer is equivalent to or very similar to an already existing tokenizer of 🤗 Transformers.
It is very important to find/extract the original tokenizer file and to manage to load this file into the 🤗 Transformers’ implementation of the tokenizer.
To ensure that the tokenizer works correctly, it is recommended to first create a script in the original repository that inputs a string and returns the input_ids. It could look similar to this (in pseudo-code):
input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."
model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/")
input_ids = model.tokenize(input_str)
You might have to take a deeper look again into the original repository to find the correct tokenizer function or you might even have to make changes to your clone of the original repository to only output the input_ids. Having written a functional tokenization script that uses the original repository, an analogous script for 🤗 Transformers should be created. It should look similar to this:
from transformers import BrandNewBertTokenizer
input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."
tokenizer = BrandNewBertTokenizer.from_pretrained("/path/to/tokenizer/folder/")
input_ids = tokenizer(input_str).input_ids
When both input_ids yield the same values, as a final step a tokenizer test file should also be added.
Analogous to the modeling test files of brand_new_bert, the tokenization test files of brand_new_bert should contain a couple of hard-coded integration tests.
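A hard-coded tokenizer integration test could be sketched as follows; again, the checkpoint name and the expected ids are placeholders to be replaced with the values produced by the original tokenizer:
import unittest

from transformers import BrandNewBertTokenizer
from transformers.testing_utils import slow


class BrandNewBertTokenizationIntegrationTest(unittest.TestCase):
    @slow
    def test_tokenizer_integration(self):
        tokenizer = BrandNewBertTokenizer.from_pretrained("author/brand_new_bert_base")  # placeholder repo id
        input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."
        expected_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]  # placeholder - use the ids returned by the original tokenizer
        self.assertListEqual(tokenizer(input_str).input_ids, expected_ids)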
10. Run End-to-end integration tests
Having added the tokenizer, you should also add a couple of end-to-end integration tests using both the model and the tokenizer to tests/models/brand_new_bert/test_modeling_brand_new_bert.py in 🤗 Transformers. Such a test should show on a meaningful text-to-text sample that the 🤗 Transformers implementation works as expected. A meaningful text-to-text sample can include e.g. a source-to-target translation pair, an article-to-summary pair, a question-to-answer pair, etc. If none of the ported checkpoints has been fine-tuned on a downstream task, it is enough to simply rely on the model tests. In a final step to ensure that the model is fully functional, it is advised that you also run all tests on GPU. It can happen that you forgot to add some .to(self.device) statements to internal tensors of the model, which would show up as an error in such a test. In case you have no access to a GPU, the Hugging Face team can take care of running those tests for you.
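As an illustration only, for a checkpoint fine-tuned on summarization such a test might be sketched as below; the BrandNewBertForConditionalGeneration head, the checkpoint name and the expected summary are all placeholders for whatever your model actually provides:
import unittest

from transformers import BrandNewBertForConditionalGeneration, BrandNewBertTokenizer
from transformers.testing_utils import slow


class BrandNewBertModelIntegrationTests(unittest.TestCase):
    @slow
    def test_summarization_end_to_end(self):
        tokenizer = BrandNewBertTokenizer.from_pretrained("author/brand_new_bert_summarization")
        model = BrandNewBertForConditionalGeneration.from_pretrained("author/brand_new_bert_summarization")
        inputs = tokenizer("A long article that should be summarized ...", return_tensors="pt")
        generated_ids = model.generate(**inputs, max_new_tokens=20)
        summary = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
        self.assertEqual(summary, "<summary produced by the original implementation>")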
11. Add Docstring
Now, all the necessary functionality for brand_new_bert is added - you’re almost done! The only thing left to add is a nice docstring and a doc page. The Cookiecutter should have added a template file called docs/source/model_doc/brand_new_bert.md that you should fill out. Users of your model will usually first look at this page before using your model. Hence, the documentation must be understandable and concise. It is very useful for the community to add some Tips to show how the model should be used. Don’t hesitate to ping the Hugging Face team regarding the docstrings.
Next, make sure that the docstring added to src/transformers/models/brand_new_bert/modeling_brand_new_bert.py is correct and includes all necessary inputs and outputs. We have a detailed guide about writing documentation and our docstring format here. It is always good to remind oneself that documentation should be treated at least as carefully as the code in 🤗 Transformers since the documentation is usually the first contact point of the community with the model.
Code refactor
Great, now you have added all the necessary code for brand_new_bert. At this point, you should fix any potentially incorrect code style by running:
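make style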
and verify that your coding style passes the quality check:
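make quality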
There are a couple of other very strict design tests in 🤗 Transformers that might still be failing, which shows up in the tests of your pull request. This is often because of some missing information in the docstring or some incorrect naming. The Hugging Face team will surely help you if you’re stuck here.
Lastly, it is always a good idea to refactor one’s code after having ensured that the code works correctly. With all tests passing, now it’s a good time to go over the added code again and do some refactoring.
You have now finished the coding part, congratulations! 🎉 You are Awesome! 😎
12. Upload the models to the model hub
In this final part, you should convert and upload all checkpoints to the model hub and add a model card for each uploaded model checkpoint. You can get familiar with the hub functionalities by reading our Model sharing and uploading Page. You should work alongside the Hugging Face team here to decide on a fitting name for each checkpoint and to get the required access rights to be able to upload the model under the author’s organization of brand_new_bert. The push_to_hub method, present in all models in transformers, is a quick and efficient way to push your checkpoint to the hub. A little snippet is pasted below:
brand_new_bert.push_to_hub("brand_new_bert")
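If the checkpoints should live under the author's organization rather than under your personal namespace, you can (given the required access rights) pass the organization as part of the repository id:
brand_new_bert.push_to_hub("<organization>/brand_new_bert")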
It is worth spending some time to create fitting model cards for each checkpoint. The model cards should highlight the specific characteristics of this particular checkpoint, e.g. on which dataset was the checkpoint pretrained or fine-tuned? On what downstream task should the model be used? They should also include some code on how to correctly use the model.
13. (Optional) Add notebook
It is very helpful to add a notebook that showcases in-detail how brand_new_bert can be used for inference and/or fine-tuned on a downstream task. This is not mandatory to merge your PR, but very useful for the community.
14. Submit your finished PR
You’re done programming now and can move to the last step, which is getting your PR merged into main. Usually, the Hugging Face team should have helped you already at this point, but it is worth taking some time to give your finished PR a nice description and eventually add comments to your code, if you want to point out certain design choices to your reviewer.
Share your work!!
Now, it’s time to get some credit from the community for your work! Having completed a model addition is a major contribution to Transformers and the whole NLP community. Your code and the ported pre-trained models will certainly be used by hundreds and possibly even thousands of developers and researchers. You should be proud of your work and share your achievements with the community.
You have made another model that is super easy to access for everyone in the community! 🤯 |
https://huggingface.co/docs/transformers/contributing | Contribute to 🤗 Transformers
Everyone is welcome to contribute, and we value everybody’s contribution. Code contributions are not the only way to help the community. Answering questions, helping others, and improving the documentation are also immensely valuable.
It also helps us if you spread the word! Reference the library in blog posts about the awesome projects it made possible, shout out on Twitter every time it has helped you, or simply ⭐️ the repository to say thank you.
However you choose to contribute, please be mindful and respect our code of conduct.
This guide was heavily inspired by the awesome scikit-learn guide to contributing.
Ways to contribute
There are several ways you can contribute to 🤗 Transformers:
Fix outstanding issues with the existing code.
Submit issues related to bugs or desired new features.
Implement new models.
Contribute to the examples or to the documentation.
If you don’t know where to start, there is a special Good First Issue listing. It will give you a list of open issues that are beginner-friendly and help you start contributing to open-source. Just comment in the issue that you’d like to work on it.
For something slightly more challenging, you can also take a look at the Good Second Issue list. In general though, if you feel like you know what you’re doing, go for it and we’ll help you get there! 🚀
All contributions are equally valuable to the community. 🥰
Fixing outstanding issues
If you notice an issue with the existing code and have a fix in mind, feel free to start contributing and open a Pull Request!
Submitting a bug-related issue or feature request
Do your best to follow these guidelines when submitting a bug-related issue or a feature request. It will make it easier for us to come back to you quickly and with good feedback.
Did you find a bug?
The 🤗 Transformers library is robust and reliable thanks to users who report the problems they encounter.
Before you report an issue, we would really appreciate it if you could make sure the bug was not already reported (use the search bar on GitHub under Issues). Your issue should also be related to bugs in the library itself, and not your code. If you’re unsure whether the bug is in your code or the library, please ask on the forum first. This helps us respond quicker to fixing issues related to the library versus general questions.
Once you’ve confirmed the bug hasn’t already been reported, please include the following information in your issue so we can quickly resolve it:
Your OS type and version and Python, PyTorch and TensorFlow versions when applicable.
A short, self-contained, code snippet that allows us to reproduce the bug in less than 30s.
The full traceback if an exception is raised.
Attach any other additional information, like screenshots, you think may help.
To get the OS and software versions automatically, run the following command:
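transformers-cli env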
You can also run the same command from the root of the repository:
python src/transformers/commands/transformers_cli.py env
Do you want a new feature?
If there is a new feature you’d like to see in 🤗 Transformers, please open an issue and describe:
What is the motivation behind this feature? Is it related to a problem or frustration with the library? Is it a feature related to something you need for a project? Is it something you worked on and think it could benefit the community?
Whatever it is, we’d love to hear about it!
Describe your requested feature in as much detail as possible. The more you can tell us about it, the better we’ll be able to help you.
Provide a code snippet that demonstrates the feature's usage.
If the feature is related to a paper, please include a link.
If your issue is well written we’re already 80% of the way there by the time you create it.
We have added templates to help you get started with your issue.
Do you want to implement a new model?
New models are constantly released and if you want to implement a new model, please provide the following information:
A short description of the model and link to the paper.
Link to the implementation if it is open-sourced.
Link to the model weights if they are available.
If you are willing to contribute the model yourself, let us know so we can help you add it to 🤗 Transformers!
We have added a detailed guide and templates to help you get started with adding a new model, and we also have a more technical guide for how to add a model to 🤗 Transformers.
Do you want to add documentation?
We’re always looking for improvements to the documentation that make it more clear and accurate. Please let us know how the documentation can be improved such as typos and any content that is missing, unclear or inaccurate. We’ll be happy to make the changes or help you make a contribution if you’re interested!
For more details about how to generate, build, and write the documentation, take a look at the documentation README.
Create a Pull Request
Before writing any code, we strongly advise you to search through the existing PRs or issues to make sure nobody is already working on the same thing. If you are unsure, it is always a good idea to open an issue to get some feedback.
You will need basic git proficiency to contribute to 🤗 Transformers. While git is not the easiest tool to use, it has the greatest manual. Type git --help in a shell and enjoy! If you prefer books, Pro Git is a very good reference.
You’ll need Python 3.8 or above to contribute to 🤗 Transformers. Follow the steps below to start contributing:
Fork the repository by clicking on the Fork button on the repository’s page. This creates a copy of the code under your GitHub user account.
Clone your fork to your local disk, and add the base repository as a remote:
git clone git@github.com:<your Github handle>/transformers.git
cd transformers
git remote add upstream https://github.com/huggingface/transformers.git
Create a new branch to hold your development changes:
git checkout -b a-descriptive-name-for-my-changes
🚨 Do not work on the main branch!
Set up a development environment by running the following command in a virtual environment:
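pip install -e ".[dev]"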
If 🤗 Transformers was already installed in the virtual environment, remove it with pip uninstall transformers before reinstalling it in editable mode with the -e flag.
Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that’s the case make sure to install the Deep Learning framework you are working with (PyTorch, TensorFlow and/or Flax) then do:
pip install -e ".[quality]"
which should be enough for most use cases.
Develop the features on your branch.
As you work on your code, you should make sure the test suite passes. Run the tests impacted by your changes like this:
pytest tests/<TEST_TO_RUN>.py
For more information about tests, check out the Testing guide.
🤗 Transformers relies on black and ruff to format its source code consistently. After you make changes, apply automatic style corrections and code verifications that can’t be automated in one go with:
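make fixup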
This target is also optimized to only work with files modified by the PR you’re working on.
If you prefer to run the checks one after the other, the following command applies the style corrections:
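make style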
🤗 Transformers also uses ruff and a few custom scripts to check for coding mistakes. Quality controls are run by the CI, but you can run the same checks with:
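make quality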
Finally, we have a lot of scripts to make sure we didn’t forget to update some files when adding a new model. You can run these scripts with:
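make repo-consistency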
To learn more about those checks and how to fix any issues with them, check out the Checks on a Pull Request guide.
If you’re modifying documents under docs/source directory, make sure the documentation can still be built. This check will also run in the CI when you open a pull request. To run a local check make sure you install the documentation builder:
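pip install ".[docs]"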
Run the following command from the root of the repository:
doc-builder build transformers docs/source/en --build_dir ~/tmp/test-build
This will build the documentation in the ~/tmp/test-build folder where you can inspect the generated Markdown files with your favorite editor. You can also preview the docs on GitHub when you open a pull request.
Once you’re happy with your changes, add changed files with git add and record your changes locally with git commit:
git add modified_file.py
git commit
Please remember to write good commit messages to clearly communicate the changes you made!
To keep your copy of the code up to date with the original repository, rebase your branch on upstream/branch before you open a pull request or if requested by a maintainer:
git fetch upstream
git rebase upstream/main
Push your changes to your branch:
git push -u origin a-descriptive-name-for-my-changes
If you’ve already opened a pull request, you’ll need to force push with the --force flag. Otherwise, if the pull request hasn’t been opened yet, you can just push your changes normally.
Now you can go to your fork of the repository on GitHub and click on Pull request to open a pull request. Make sure you tick off all the boxes in our checklist below. When you’re ready, you can send your changes to the project maintainers for review.
It’s ok if maintainers request changes, it happens to our core contributors too! So everyone can see the changes in the pull request, work in your local branch and push the changes to your fork. They will automatically appear in the pull request.
Pull request checklist
☐ The pull request title should summarize your contribution.
☐ If your pull request addresses an issue, please mention the issue number in the pull request description to make sure they are linked (and people viewing the issue know you are working on it).
☐ To indicate a work in progress please prefix the title with [WIP]. These are useful to avoid duplicated work, and to differentiate it from PRs ready to be merged.
☐ Make sure existing tests pass.
☐ If adding a new feature, also add tests for it.
If you are adding a new model, make sure you use ModelTester.all_model_classes = (MyModel, MyModelWithLMHead,...) to trigger the common tests.
If you are adding new @slow tests, make sure they pass using RUN_SLOW=1 python -m pytest tests/models/my_new_model/test_my_new_model.py.
If you are adding a new tokenizer, write tests and make sure RUN_SLOW=1 python -m pytest tests/models/{your_model_name}/test_tokenization_{your_model_name}.py passes.
CircleCI does not run the slow tests, but GitHub Actions does every night!
☐ All public methods must have informative docstrings (see modeling_bert.py for an example).
☐ Due to the rapidly growing repository, don't add any images, videos and other non-text files that would significantly weigh down the repository. Instead, use a Hub repository such as hf-internal-testing to host these files and reference them by URL. We recommend placing documentation-related images in the following repository: huggingface/documentation-images. You can open a PR on this dataset repository and ask a Hugging Face member to merge it.
For more information about the checks run on a pull request, take a look at our Checks on a Pull Request guide.
Tests
An extensive test suite is included to test the library behavior and several examples. Library tests can be found in the tests folder and examples tests in the examples folder.
We like pytest and pytest-xdist because it’s faster. From the root of the repository, specify a path to a subfolder or a test file to run the test.
python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model
Similarly, for the examples directory, specify a path to a subfolder or test file to run the test. For example, the following command tests the text classification subfolder in the PyTorch examples directory:
pip install -r examples/xxx/requirements.txt
python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification
This is actually how our make test and make test-examples commands are implemented (not including the pip install)!
You can also specify a smaller set of tests in order to test only the feature you’re working on.
By default, slow tests are skipped but you can set the RUN_SLOW environment variable to yes to run them. This will download many gigabytes of models so make sure you have enough disk space, a good internet connection or a lot of patience!
Remember to specify a path to a subfolder or a test file to run the test. Otherwise, you’ll run all the tests in the tests or examples folder, which will take a very long time!
RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model
RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification
Like the slow tests, there are other environment variables available which are not enabled by default during testing:
RUN_CUSTOM_TOKENIZERS: Enables tests for custom tokenizers.
RUN_PT_FLAX_CROSS_TESTS: Enables tests for PyTorch + Flax integration.
RUN_PT_TF_CROSS_TESTS: Enables tests for TensorFlow + PyTorch integration.
More environment variables and additional information can be found in the testing_utils.py.
🤗 Transformers uses pytest as a test runner only. It doesn’t use any pytest-specific features in the test suite itself.
This means unittest is fully supported. Here’s how to run tests with unittest:
python -m unittest discover -s tests -t . -v
python -m unittest discover -s examples -t examples -v
Style guide
For documentation strings, 🤗 Transformers follows the Google Python Style Guide. Check our documentation writing guide for more information.
Develop on Windows
On Windows (unless you’re working in Windows Subsystem for Linux or WSL), you need to configure git to transform Windows CRLF line endings to Linux LF line endings:
git config core.autocrlf input
One way to run the make command on Windows is with MSYS2:
Download MSYS2, and we assume it’s installed in C:\msys64.
Open the command line C:\msys64\msys2.exe (it should be available from the Start menu).
Run in the shell: pacman -Syu and install make with pacman -S make.
Add C:\msys64\usr\bin to your PATH environment variable.
You can now use make from any terminal (Powershell, cmd.exe, etc.)! 🎉
Sync a forked repository with upstream main (the Hugging Face repository)
When updating the main branch of a forked repository, please follow these steps to avoid pinging the upstream repository which adds reference notes to each upstream PR, and sends unnecessary notifications to the developers involved in these PRs.
When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main.
If a PR is absolutely necessary, use the following steps after checking out your branch:
git checkout -b your-branch-for-syncing
git pull --squash --no-commit upstream main
git commit -m '<your message without GitHub references>'
git push --set-upstream origin your-branch-for-syncing |
https://huggingface.co/docs/transformers/testing | Testing
Let’s take a look at how 🤗 Transformers models are tested and how you can write new tests and improve the existing ones.
There are 2 test suites in the repository:
tests — tests for the general API
examples — tests primarily for various applications that aren’t part of the API
How transformers are tested
Once a PR is submitted it gets tested with 9 CircleCI jobs. Every new commit to that PR gets retested. These jobs are defined in this config file, so that if needed you can reproduce the same environment on your machine.
These CI jobs don’t run @slow tests.
There are 3 jobs run by github actions:
torch hub integration: checks whether torch hub integration works.
self-hosted (push): runs fast tests on GPU only on commits on main. It only runs if a commit on main has updated the code in one of the following folders: src, tests, .github (to prevent running on added model cards, notebooks, etc.)
self-hosted runner: runs normal and slow tests on GPU in tests and examples:
RUN_SLOW=1 pytest tests/
RUN_SLOW=1 pytest examples/
The results can be observed here.
Running tests
Choosing which tests to run
This document goes into many details of how tests can be run. If after reading everything, you need even more details you will find them here.
Here are some of the most useful ways of running tests.
Run all:
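pytest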
or:
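make test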
Note that the latter is defined as:
python -m pytest -n auto --dist=loadfile -s -v ./tests/
which tells pytest to:
run as many test processes as there are CPU cores (which could be too many if you don't have a ton of RAM!)
ensure that all tests from the same file will be run by the same test process
do not capture output
run in verbose mode
Getting the list of all tests
All tests of the test suite:
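pytest --collect-only -q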
All tests of a given test file:
pytest tests/test_optimization.py --collect-only -q
Run a specific test module
To run an individual test module:
pytest tests/utils/test_logging.py
Run specific tests
Since unittest is used inside most of the tests, to run specific subtests you need to know the name of the unittest class containing those tests. For example, it could be:
pytest tests/test_optimization.py::OptimizationTest::test_adam_w
Here:
tests/test_optimization.py - the file with tests
OptimizationTest - the name of the class
test_adam_w - the name of the specific test function
If the file contains multiple classes, you can choose to run only tests of a given class. For example:
pytest tests/test_optimization.py::OptimizationTest
will run all the tests inside that class.
As mentioned earlier you can see what tests are contained inside the OptimizationTest class by running:
pytest tests/test_optimization.py::OptimizationTest --collect-only -q
You can run tests by keyword expressions.
To run only tests whose name contains adam:
pytest -k adam tests/test_optimization.py
Logical and and or can be used to indicate whether all keywords should match or either. not can be used to negate.
To run all tests except those whose name contains adam:
pytest -k "not adam" tests/test_optimization.py
And you can combine the two patterns in one:
pytest -k "ada and not adam" tests/test_optimization.py
For example to run both test_adafactor and test_adam_w you can use:
pytest -k "test_adam_w or test_adam_w" tests/test_optimization.py
Note that we use or here, since we want either of the keywords to match to include both.
If you want to include only tests that include both patterns, and is to be used:
pytest -k "test and ada" tests/test_optimization.py
Run accelerate tests
Sometimes you need to run accelerate tests on your models. For that you can just add -m accelerate_tests to your command. If, let's say, you want to run these tests on OPT, run:
RUN_SLOW=1 pytest -m accelerate_tests tests/models/opt/test_modeling_opt.py
Run documentation tests
In order to test whether the documentation examples are correct, you should check that the doctests are passing. As an example, let’s use WhisperModel.forward’s docstring:
r""" Returns: Example: ```python >>> import torch >>> from transformers import WhisperModel, WhisperFeatureExtractor >>> from datasets import load_dataset >>> model = WhisperModel.from_pretrained("openai/whisper-base") >>> feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-base") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt") >>> input_features = inputs.input_features >>> decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id >>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state >>> list(last_hidden_state.shape) [1, 2, 512] ```"""
Just run the following line to automatically test every docstring example in the desired file:
pytest --doctest-modules <path_to_file_or_dir>
If the file has a markdown extension, you should add the --doctest-glob="*.md" argument.
Run only modified tests
You can run the tests related to the unstaged files or the current branch (according to Git) by using pytest-picked. This is a great way of quickly testing your changes didn’t break anything, since it won’t run the tests related to files you didn’t touch.
pip install pytest-picked
All tests will be run from files and folders which are modified, but not yet committed.
Automatically rerun failed tests on source modification
pytest-xdist provides a very useful feature of detecting all failed tests, and then waiting for you to modify files and continuously re-running those failing tests until they pass while you fix them. So you don't need to restart pytest after you made the fix. This is repeated until all tests pass, after which a full run is performed again.
To enter the mode: pytest -f or pytest --looponfail
File changes are detected by looking at looponfailroots root directories and all of their contents (recursively). If the default for this value does not work for you, you can change it in your project by setting a configuration option in setup.cfg:
[tool:pytest]
looponfailroots = transformers tests
or pytest.ini/tox.ini files:
[pytest]
looponfailroots = transformers tests
This would lead to only looking for file changes in the respective directories, specified relatively to the ini-file’s directory.
pytest-watch is an alternative implementation of this functionality.
Skip a test module
If you want to run all test modules, except a few you can exclude them by giving an explicit list of tests to run. For example, to run all except test_modeling_*.py tests:
pytest $(ls -1 tests/*py | grep -v test_modeling)
Clearing state
On CI builds, and when isolation is important (at the expense of speed), the cache should be cleared:
pytest --cache-clear tests
Running tests in parallel
As mentioned earlier make test runs tests in parallel via pytest-xdist plugin (-n X argument, e.g. -n 2 to run 2 parallel jobs).
pytest-xdist’s --dist= option allows one to control how the tests are grouped. --dist=loadfile puts the tests located in one file onto the same process.
Since the order of executed tests is different and unpredictable, if running the test suite with pytest-xdist produces failures (meaning we have some undetected coupled tests), use pytest-replay to replay the tests in the same order, which should help reduce that failing sequence to a minimum.
Test order and repetition
It’s good to repeat the tests several times, in sequence, randomly, or in sets, to detect any potential inter-dependency and state-related bugs (tear down). And the straightforward multiple repetition is just good to detect some problems that get uncovered by randomness of DL.
Repeat tests
pytest-flakefinder:
pip install pytest-flakefinder
And then run every test multiple times (50 by default):
pytest --flake-finder --flake-runs=5 tests/test_failing_test.py
This plugin doesn’t work with -n flag from pytest-xdist.
There is another plugin pytest-repeat, but it doesn’t work with unittest.
Run tests in a random order
pip install pytest-random-order
Important: the presence of pytest-random-order will automatically randomize tests, no configuration change or command line options is required.
As explained earlier this allows detection of coupled tests - where one test’s state affects the state of another. When pytest-random-order is installed it will print the random seed it used for that session, e.g:
pytest tests
[...]
Using --random-order-bucket=module
Using --random-order-seed=573663
So that if the given particular sequence fails, you can reproduce it by adding that exact seed, e.g.:
pytest --random-order-seed=573663
[...]
Using --random-order-bucket=module
Using --random-order-seed=573663
It will only reproduce the exact order if you use the exact same list of tests (or no list at all). Once you start manually narrowing down the list, you can no longer rely on the seed; instead, list the tests manually in the exact order they failed and tell pytest not to randomize them, using --random-order-bucket=none, e.g.:
pytest --random-order-bucket=none tests/test_a.py tests/test_c.py tests/test_b.py
To disable the shuffling for all tests:
pytest --random-order-bucket=none
By default --random-order-bucket=module is implied, which will shuffle the files on the module levels. It can also shuffle on class, package, global and none levels. For the complete details please see its documentation.
Another randomization alternative is: pytest-randomly. This module has a very similar functionality/interface, but it doesn’t have the bucket modes available in pytest-random-order. It has the same problem of imposing itself once installed.
Look and feel variations
pytest-sugar
pytest-sugar is a plugin that improves the look-n-feel, adds a progressbar, and shows failing tests and the assert instantly. It gets activated automatically upon installation.
To run tests without it, run:
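pytest -p no:sugar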
or uninstall it.
Report each sub-test name and its progress
For a single or a group of tests via pytest (after pip install pytest-pspec):
pytest --pspec tests/test_optimization.py
Instantly shows failed tests
pytest-instafail shows failures and errors instantly instead of waiting until the end of test session.
pip install pytest-instafail
To GPU or not to GPU
On a GPU-enabled setup, to test in CPU-only mode add CUDA_VISIBLE_DEVICES="":
CUDA_VISIBLE_DEVICES="" pytest tests/utils/test_logging.py
or if you have multiple gpus, you can specify which one is to be used by pytest. For example, to use only the second gpu if you have gpus 0 and 1, you can run:
CUDA_VISIBLE_DEVICES="1" pytest tests/utils/test_logging.py
This is handy when you want to run different tasks on different GPUs.
Some tests must be run on CPU-only, others on either CPU or GPU or TPU, yet others on multiple-GPUs. The following skip decorators are used to set the requirements of tests CPU/GPU/TPU-wise:
require_torch - this test will run only under torch
require_torch_gpu - as require_torch plus requires at least 1 GPU
require_torch_multi_gpu - as require_torch plus requires at least 2 GPUs
require_torch_non_multi_gpu - as require_torch plus requires 0 or 1 GPUs
require_torch_up_to_2_gpus - as require_torch plus requires 0 or 1 or 2 GPUs
require_torch_tpu - as require_torch plus requires at least 1 TPU
Let’s depict the GPU requirements in the following table:
| n gpus | decorator                    |
|--------|------------------------------|
| >= 0   | @require_torch               |
| >= 1   | @require_torch_gpu           |
| >= 2   | @require_torch_multi_gpu     |
| < 2    | @require_torch_non_multi_gpu |
| < 3    | @require_torch_up_to_2_gpus  |
For example, here is a test that must be run only when there are 2 or more GPUs available and pytorch is installed:
@require_torch_multi_gpu
def test_example_with_multi_gpu():
If a test requires tensorflow use the require_tf decorator. For example:
@require_tf
def test_tf_thing_with_tensorflow():
These decorators can be stacked. For example, if a test is slow and requires at least one GPU under pytorch, here is how to set it up:
@require_torch_gpu
@slow
def test_example_slow_on_gpu():
Some decorators like @parametrized rewrite test names, therefore @require_* skip decorators have to be listed last for them to work correctly. Here is an example of the correct usage:
@parameterized.expand(...)
@require_torch_multi_gpu
def test_integration_foo():
This order problem doesn’t exist with @pytest.mark.parametrize, you can put it first or last and it will still work. But it only works with non-unittests.
Inside tests:
How many GPUs are available:
from transformers.testing_utils import get_gpu_count
n_gpu = get_gpu_count()
Testing with a specific PyTorch backend or device
To run the test suite on a specific torch device add TRANSFORMERS_TEST_DEVICE="$device" where $device is the target backend. For example, to test on CPU only:
TRANSFORMERS_TEST_DEVICE="cpu" pytest tests/utils/test_logging.py
This variable is useful for testing custom or less common PyTorch backends such as mps. It can also be used to achieve the same effect as CUDA_VISIBLE_DEVICES by targeting specific GPUs or testing in CPU-only mode.
Certain devices will require an additional import after importing torch for the first time. This can be specified using the environment variable TRANSFORMERS_TEST_BACKEND:
TRANSFORMERS_TEST_BACKEND="torch_npu" pytest tests/utils/test_logging.py
Distributed training
pytest can’t deal with distributed training directly. If this is attempted - the sub-processes don’t do the right thing and end up thinking they are pytest and start running the test suite in loops. It works, however, if one spawns a normal process that then spawns off multiple workers and manages the IO pipes.
Here are some tests that use it:
test_trainer_distributed.py
test_deepspeed.py
To jump right into the execution point, search for the execute_subprocess_async call in those tests.
You will need at least 2 GPUs to see these tests in action:
CUDA_VISIBLE_DEVICES=0,1 RUN_SLOW=1 pytest -sv tests/test_trainer_distributed.py
Output capture
During test execution any output sent to stdout and stderr is captured. If a test or a setup method fails, its corresponding captured output will usually be shown along with the failure traceback.
To disable output capturing and to get the stdout and stderr normally, use -s or --capture=no:
pytest -s tests/utils/test_logging.py
To send test results to JUnit format output:
py.test tests --junitxml=result.xml
Color control
To have no color (e.g., yellow on white background is not readable):
pytest --color=no tests/utils/test_logging.py
Sending test report to online pastebin service
Creating a URL for each test failure:
pytest --pastebin=failed tests/utils/test_logging.py
This will submit test run information to a remote Paste service and provide a URL for each failure. You may select tests as usual or add for example -x if you only want to send one particular failure.
Creating a URL for a whole test session log:
pytest --pastebin=all tests/utils/test_logging.py
Writing tests
🤗 transformers tests are based on unittest, but run by pytest, so most of the time features from both systems can be used.
You can read here which features are supported, but the important thing to remember is that most pytest fixtures don't work. Neither does pytest's parametrization, but we use the parameterized module, which works in a similar way.
Parametrization
Often, there is a need to run the same test multiple times, but with different arguments. It could be done from within the test, but then there is no way of running that test for just one set of arguments.
import math
import unittest

from parameterized import parameterized


class TestMathUnitTest(unittest.TestCase):
    @parameterized.expand(
        [("negative", -1.5, -2.0), ("integer", 1, 1.0), ("large fraction", 1.6, 1)]
    )
    def test_floor(self, name, input, expected):
        self.assertEqual(math.floor(input), expected)
Now, by default this test will be run 3 times, each time with the last 3 arguments of test_floor being assigned the corresponding arguments in the parameter list.
and you could run just the negative and integer sets of params with:
pytest -k "negative and integer" tests/test_mytest.py
or all but negative sub-tests, with:
pytest -k "not negative" tests/test_mytest.py
Besides using the -k filter that was just mentioned, you can find out the exact name of each sub-test and run any or all of them using their exact names.
pytest test_this1.py --collect-only -q
and it will list:
test_this1.py::TestMathUnitTest::test_floor_0_negative
test_this1.py::TestMathUnitTest::test_floor_1_integer
test_this1.py::TestMathUnitTest::test_floor_2_large_fraction
So now you can run just 2 specific sub-tests:
pytest test_this1.py::TestMathUnitTest::test_floor_0_negative test_this1.py::TestMathUnitTest::test_floor_1_integer
The module parameterized which is already in the developer dependencies of transformers works for both: unittests and pytest tests.
If, however, the test is not a unittest, you may use pytest.mark.parametrize (or you may see it being used in some existing tests, mostly under examples).
Here is the same example, this time using pytest’s parametrize marker:
import math

import pytest


@pytest.mark.parametrize(
    "name, input, expected",
    [("negative", -1.5, -2.0), ("integer", 1, 1.0), ("large fraction", 1.6, 1)],
)
def test_floor(name, input, expected):
    assert math.floor(input) == expected
Same as with parameterized, with pytest.mark.parametrize you can have a fine control over which sub-tests are run, if the -k filter doesn’t do the job. Except, this parametrization function creates a slightly different set of names for the sub-tests. Here is what they look like:
pytest test_this2.py --collect-only -q
and it will list:
test_this2.py::test_floor[integer-1-1.0]
test_this2.py::test_floor[negative--1.5--2.0]
test_this2.py::test_floor[large fraction-1.6-1]
So now you can run just the specific test:
pytest test_this2.py::test_floor[negative--1.5--2.0] test_this2.py::test_floor[integer-1-1.0]
as in the previous example.
Files and directories
In tests often we need to know where things are relative to the current test file, and it's not trivial since the test could be invoked from more than one directory or could reside in sub-directories with different depths. A helper class transformers.testing_utils.TestCasePlus solves this problem by sorting out all the basic paths and provides easy accessors to them:
pathlib objects (all fully resolved):
test_file_path - the current test file path, i.e. __file__
test_file_dir - the directory containing the current test file
tests_dir - the directory of the tests test suite
examples_dir - the directory of the examples test suite
repo_root_dir - the directory of the repository
src_dir - the directory of src (i.e. where the transformers sub-dir resides)
stringified paths: the same as above, but these return paths as strings rather than pathlib objects:
test_file_path_str
test_file_dir_str
tests_dir_str
examples_dir_str
repo_root_dir_str
src_dir_str
To start using those all you need is to make sure that the test resides in a subclass of transformers.test_utils.TestCasePlus. For example:
from transformers.testing_utils import TestCasePlus
class PathExampleTest(TestCasePlus):
    def test_something_involving_local_locations(self):
        data_dir = self.tests_dir / "fixtures/tests_samples/wmt_en_ro"
If you don't need to manipulate paths via pathlib or you just need a path as a string, you can always invoke str() on the pathlib object or use the accessors ending with _str. For example:
from transformers.testing_utils import TestCasePlus
class PathExampleTest(TestCasePlus):
    def test_something_involving_stringified_locations(self):
        examples_dir = self.examples_dir_str
Temporary files and directories
Using unique temporary files and directories is essential for parallel test running, so that the tests won't overwrite each other's data. We also want the temporary files and directories removed at the end of each test that created them. Therefore, using packages like tempfile, which address these needs, is essential.
However, when debugging tests, you need to be able to see what goes into the temporary file or directory, and you want to know its exact path rather than having it randomized on every test re-run.
A helper class transformers.testing_utils.TestCasePlus is best used for such purposes. It's a sub-class of unittest.TestCase, so we can easily inherit from it in the test modules.
Here is an example of its usage:
from transformers.testing_utils import TestCasePlus
class ExamplesTests(TestCasePlus):
    def test_whatever(self):
        tmp_dir = self.get_auto_remove_tmp_dir()
This code creates a unique temporary directory, and sets tmp_dir to its location.
Create a unique temporary dir:
def test_whatever(self):
tmp_dir = self.get_auto_remove_tmp_dir()
tmp_dir will contain the path to the created temporary dir. It will be automatically removed at the end of the test.
Create a temporary dir of my choice, ensure it’s empty before the test starts and don’t empty it after the test.
def test_whatever(self):
tmp_dir = self.get_auto_remove_tmp_dir("./xxx")
This is useful for debug when you want to monitor a specific directory and want to make sure the previous tests didn’t leave any data in there.
You can override the default behavior by directly overriding the before and after args, leading to one of the following behaviors:
before=True: the temporary dir will always be cleared at the beginning of the test.
before=False: if the temporary dir already existed, any existing files will remain there.
after=True: the temporary dir will always be deleted at the end of the test.
after=False: the temporary dir will always be left intact at the end of the test.
In order to run the equivalent of rm -r safely, only subdirs of the project repository checkout are allowed if an explicit tmp_dir is used, so that by mistake no /tmp or similar important part of the filesystem will get nuked. i.e. please always pass paths that start with ./.
Each test can register multiple temporary directories and they all will get auto-removed, unless requested otherwise.
Temporary sys.path override
If you need to temporary override sys.path to import from another test for example, you can use the ExtendSysPath context manager. Example:
import os
from transformers.testing_utils import ExtendSysPath
bindir = os.path.abspath(os.path.dirname(__file__))
with ExtendSysPath(f"{bindir}/.."):
from test_trainer import TrainerIntegrationCommon
Skipping tests
This is useful when a bug is found and a new test is written, yet the bug is not fixed yet. In order to be able to commit it to the main repository we need to make sure it's skipped during make test.
Methods:
A skip means that you expect your test to pass only if some conditions are met, otherwise pytest should skip running the test altogether. Common examples are skipping windows-only tests on non-windows platforms, or skipping tests that depend on an external resource which is not available at the moment (for example a database).
A xfail means that you expect a test to fail for some reason. A common example is a test for a feature not yet implemented, or a bug not yet fixed. When a test passes despite being expected to fail (marked with pytest.mark.xfail), it’s an xpass and will be reported in the test summary.
One of the important differences between the two is that skip doesn’t run the test, and xfail does. So if the code that’s buggy causes some bad state that will affect other tests, do not use xfail.
Implementation
Here is how to skip whole test unconditionally:
@unittest.skip("this bug needs to be fixed")
def test_feature_x():
or via pytest:
@pytest.mark.skip(reason="this bug needs to be fixed")
or the xfail way:
@pytest.mark.xfail
def test_feature_x():
Here is how to skip a test based on some internal check inside the test:
def test_feature_x():
    if not has_something():
        pytest.skip("unsupported configuration")
or the whole module:
import pytest
if not pytest.config.getoption("--custom-flag"):
pytest.skip("--custom-flag is missing, skipping tests", allow_module_level=True)
or the xfail way:
def test_feature_x():
pytest.xfail("expected to fail until bug XYZ is fixed")
Here is how to skip all tests in a module if some import is missing:
docutils = pytest.importorskip("docutils", minversion="0.3")
Skip a test based on a condition:
@pytest.mark.skipif(sys.version_info < (3,6), reason="requires python3.6 or higher")
def test_feature_x():
or:
@unittest.skipIf(torch_device == "cpu", "Can't do half precision")
def test_feature_x():
or skip the whole module:
@pytest.mark.skipif(sys.platform == 'win32', reason="does not run on windows")
class TestClass():
def test_feature_x(self):
More details, example and ways are here.
Slow tests
The library of tests is ever-growing, and some of the tests take minutes to run, therefore we can't afford to wait an hour for the test suite to complete on CI. Therefore, with some exceptions for essential tests, slow tests should be marked as in the example below:
from transformers.testing_utils import slow
@slow
def test_integration_foo():
Once a test is marked as @slow, to run such tests set RUN_SLOW=1 env var, e.g.:
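RUN_SLOW=1 pytest tests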
Some decorators like @parameterized rewrite test names, therefore @slow and the rest of the skip decorators @require_* have to be listed last for them to work correctly. Here is an example of the correct usage:
@parameterized.expand(...)
@slow
def test_integration_foo():
As explained at the beginning of this document, slow tests get to run on a scheduled basis, rather than in PRs CI checks. So it’s possible that some problems will be missed during a PR submission and get merged. Such problems will get caught during the next scheduled CI job. But it also means that it’s important to run the slow tests on your machine before submitting the PR.
Here is a rough decision making mechanism for choosing which tests should be marked as slow:
If the test is focused on one of the library's internal components (e.g., modeling files, tokenization files, pipelines), then we should run that test in the non-slow test suite. If it's focused on another aspect of the library, such as the documentation or the examples, then we should run these tests in the slow test suite. And then, to refine this approach, we should have exceptions:
All tests that need to download a heavy set of weights or a dataset that is larger than ~50MB (e.g., model or tokenizer integration tests, pipeline integration tests) should be set to slow. If you’re adding a new model, you should create and upload to the hub a tiny version of it (with random weights) for integration tests. This is discussed in the following paragraphs.
All tests that need to do a training not specifically optimized to be fast should be set to slow.
We can introduce exceptions if some of these should-be-non-slow tests are excruciatingly slow, and set them to @slow. Auto-modeling tests, which save and load large files to disk, are a good example of tests that are marked as @slow.
If a test completes under 1 second on CI (including downloads if any) then it should be a normal test regardless.
Collectively, all the non-slow tests need to cover entirely the different internals, while remaining fast. For example, a significant coverage can be achieved by testing with specially created tiny models with random weights. Such models have the very minimal number of layers (e.g., 2), vocab size (e.g., 1000), etc. Then the @slow tests can use large slow models to do qualitative testing. To see the use of these simply look for tiny models with:
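grep tiny tests examples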
Here is an example of a script that created the tiny model stas/tiny-wmt19-en-de. You can easily adjust it to your specific model's architecture.
It's easy to measure the run-time incorrectly if, for example, there is an overhead of downloading a huge model, but if you test it locally the downloaded files would be cached and thus the download time not measured. Hence check the execution speed report in CI logs instead (the output of pytest --durations=0 tests).
That report is also useful to find slow outliers that aren’t marked as such, or which need to be re-written to be fast. If you notice that the test suite starts getting slow on CI, the top listing of this report will show the slowest tests.
Testing the stdout/stderr output
In order to test functions that write to stdout and/or stderr, the test can access those streams using the pytest’s capsys system. Here is how this is accomplished:
import sys

def print_to_stdout(s):
    print(s)

def print_to_stderr(s):
    sys.stderr.write(s)

def test_result_and_stdout(capsys):
    msg = "Hello"
    print_to_stdout(msg)
    print_to_stderr(msg)
    out, err = capsys.readouterr()
    sys.stdout.write(out)
    sys.stderr.write(err)
    assert msg in out
    assert msg in err
And, of course, most of the time, stderr will come as a part of an exception, so try/except has to be used in such a case:
def raise_exception(msg):
    raise ValueError(msg)

def test_something_exception():
    msg = "Not a good value"
    error = ""
    try:
        raise_exception(msg)
    except Exception as e:
        error = str(e)
        assert msg in error, f"{msg} is in the exception:\n{error}"
Another approach to capturing stdout is via contextlib.redirect_stdout:
import sys
from io import StringIO
from contextlib import redirect_stdout

def print_to_stdout(s):
    print(s)

def test_result_and_stdout():
    msg = "Hello"
    buffer = StringIO()
    with redirect_stdout(buffer):
        print_to_stdout(msg)
    out = buffer.getvalue()
    sys.stdout.write(out)
    assert msg in out
An important potential issue with capturing stdout is that it may contain \r characters that in normal print reset everything that has been printed so far. There is no problem with pytest, but with pytest -s these characters get included in the buffer, so to be able to have the test run with and without -s, you have to make an extra cleanup to the captured output, using re.sub(r'~.*\r', '', buf, 0, re.M).
But, then we have a helper context manager wrapper to automatically take care of it all, regardless of whether it has some \r’s in it or not, so it’s a simple:
from transformers.testing_utils import CaptureStdout
with CaptureStdout() as cs:
function_that_writes_to_stdout()
print(cs.out)
Here is a full test example:
from transformers.testing_utils import CaptureStdout
msg = "Secret message\r"
final = "Hello World"
with CaptureStdout() as cs:
print(msg + final)
assert cs.out == final + "\n", f"captured: {cs.out}, expecting {final}"
If you’d like to capture stderr use the CaptureStderr class instead:
from transformers.testing_utils import CaptureStderr
with CaptureStderr() as cs:
function_that_writes_to_stderr()
print(cs.err)
If you need to capture both streams at once, use the parent CaptureStd class:
from transformers.testing_utils import CaptureStd
with CaptureStd() as cs:
function_that_writes_to_stdout_and_stderr()
print(cs.err, cs.out)
Also, to aid debugging test issues, by default these context managers automatically replay the captured streams on exit from the context.
Capturing logger stream
If you need to validate the output of a logger, you can use CaptureLogger:
from transformers import logging
from transformers.testing_utils import CaptureLogger
msg = "Testing 1, 2, 3"
logging.set_verbosity_info()
logger = logging.get_logger("transformers.models.bart.tokenization_bart")
with CaptureLogger(logger) as cl:
logger.info(msg)
assert cl.out == msg + "\n"
Testing with environment variables
If you want to test the impact of environment variables for a specific test you can use a helper decorator transformers.testing_utils.mockenv
import os
import unittest

from transformers.testing_utils import mockenv

class HfArgumentParserTest(unittest.TestCase):
    @mockenv(TRANSFORMERS_VERBOSITY="error")
    def test_env_override(self):
        env_level_str = os.getenv("TRANSFORMERS_VERBOSITY", None)
        self.assertEqual(env_level_str, "error")
At times an external program needs to be called, which requires setting PYTHONPATH in os.environ to include multiple local paths. The helper class transformers.testing_utils.TestCasePlus comes to the rescue:
from transformers.testing_utils import TestCasePlus
class EnvExampleTest(TestCasePlus):
def test_external_prog(self):
env = self.get_env()
Depending on whether the test file is under the tests test suite or examples, it will correctly set up env["PYTHONPATH"] to include one of these two directories, and also the src directory to ensure the testing is done against the current repo, plus whatever PYTHONPATH was already set to before the test was called, if anything.
This helper method creates a copy of the os.environ object, so the original remains intact.
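As a rough sketch of how that environment might then be used to run an external program (the command here is just an illustration), assuming the test is a TestCasePlus subclass:

import subprocess
import sys

from transformers.testing_utils import TestCasePlus

class EnvExampleTest(TestCasePlus):
    def test_external_prog(self):
        # PYTHONPATH includes src/ plus the right tests/ or examples/ directory
        env = self.get_env()
        cmd = [sys.executable, "-c", "import transformers; print(transformers.__file__)"]
        result = subprocess.run(cmd, env=env, capture_output=True, text=True)
        self.assertEqual(result.returncode, 0)
        # with src/ on PYTHONPATH, the local checkout is the transformers that gets imported
        self.assertIn("transformers", result.stdout)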
Getting reproducible results
In some situations you may want to remove randomness for your tests. To get identical, reproducible results you will need to fix the seed:
seed = 42

# python RNG
import random
random.seed(seed)

# pytorch RNGs
import torch
torch.manual_seed(seed)
torch.backends.cudnn.deterministic = True
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)

# numpy RNG
import numpy as np
np.random.seed(seed)

# tf RNG
import tensorflow as tf
tf.random.set_seed(seed)
Debugging tests
To start a debugger at the point of the warning, do this:
pytest tests/utils/test_logging.py -W error::UserWarning --pdb
Working with GitHub Actions workflows
To trigger a self-push workflow CI job, you must:
Create a new branch on transformers origin (not a fork!).
The branch name has to start with either ci_ or ci- (main triggers it too, but we can’t do PRs on main). It also gets triggered only for specific paths - you can find the up-to-date definition here, under push, in case it has changed since this document was written.
Create a PR from this branch.
Then you can see the job appear here. It may not run right away if there is a backlog.
Testing Experimental CI Features
Testing CI features can be potentially problematic as it can interfere with the normal CI functioning. Therefore, if a new CI feature is to be added, it should be done as follows.
Create a new dedicated job that tests what needs to be tested
The new job must always succeed so that it gives us a green ✓ (details below).
Let it run for some days to see that a variety of different PR types get to run on it (user fork branches, non-forked branches, branches originating from the github.com UI direct file edit, various forced pushes, etc. - there are so many), while monitoring the experimental job’s logs (not the overall job status, as it’s purposefully always green).
When it’s clear that everything is solid, then merge the new changes into existing jobs.
That way experiments on CI functionality itself won’t interfere with the normal workflow.
Now how can we make the job always succeed while the new CI feature is being developed?
Some CIs, like TravisCI, support ignore-step-failure and will report the overall job as successful, but CircleCI and GitHub Actions as of this writing don’t support that.
So the following workaround can be used:
set +euo pipefail at the beginning of the run command to suppress most potential failures in the bash script.
the last command must be a success: echo "done" or just true will do
Here is an example:
- run:
    name: run CI experiment
    command: |
        set +euo pipefail
        echo "setting run-all-despite-any-errors-mode"
        this_command_will_fail
        echo "but bash continues to run"
        # emulate another failure
        false
        # but the last command must be a success
        echo "during experiment do not remove: reporting success to CI, even if there were failures"
For simple commands you could also do:
cmd_that_may_fail || true
Of course, once satisfied with the results, integrate the experimental step or job with the rest of the normal jobs, while removing set +euo pipefail or any other things you may have added to ensure that the experimental job doesn’t interfere with the normal CI functioning.
This whole process would have been much easier if we could only set something like allow-failure for the experimental step and let it fail without impacting the overall status of PRs. But as mentioned earlier, CircleCI and GitHub Actions don’t support it at the moment.
You can vote for this feature and see where it is at these CI-specific threads:
Github Actions:
CircleCI: |
https://huggingface.co/docs/transformers/add_new_pipeline | How to create a custom pipeline?
In this guide, we will see how to create a custom pipeline and share it on the Hub or add it to the 🤗 Transformers library.
First and foremost, you need to decide the raw inputs the pipeline will be able to take. They can be strings, raw bytes, dictionaries or whatever seems to be the most likely desired input. Try to keep these inputs as pure Python as possible, as that makes compatibility easier (even with other languages, via JSON). Those will be the inputs of the pipeline (preprocess).
Then define the outputs. Same policy as for the inputs: the simpler, the better. Those will be the outputs of the postprocess method.
Start by inheriting from the base class Pipeline and implementing the 4 methods it needs: preprocess, _forward, postprocess, and _sanitize_parameters.
import torch
from transformers import Pipeline
class MyPipeline(Pipeline):
def _sanitize_parameters(self, **kwargs):
preprocess_kwargs = {}
if "maybe_arg" in kwargs:
preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"]
return preprocess_kwargs, {}, {}
def preprocess(self, inputs, maybe_arg=2):
model_input = torch.Tensor(inputs["input_ids"])
return {"model_input": model_input}
def _forward(self, model_inputs):
outputs = self.model(**model_inputs)
return outputs
def postprocess(self, model_outputs):
best_class = model_outputs["logits"].softmax(-1)
return best_class
The structure of this breakdown is to support relatively seamless CPU/GPU handling, while allowing the pre/postprocessing to be done on the CPU on different threads.
preprocess will take the originally defined inputs, and turn them into something feedable to the model. It might contain more information and is usually a Dict.
_forward is the implementation detail and is not meant to be called directly. forward is the preferred method to call as it contains safeguards to make sure everything is working on the expected device. If anything is linked to a real model it belongs in the _forward method; anything else goes in preprocess/postprocess.
postprocess will take the output of _forward and turn it into the final output that was decided earlier.
_sanitize_parameters exists to allow users to pass any parameters whenever they wish, be it at initialization time pipeline(...., maybe_arg=4) or at call time pipe = pipeline(...); output = pipe(...., maybe_arg=4).
The returns of _sanitize_parameters are the 3 dicts of kwargs that will be passed directly to preprocess, _forward, and postprocess. Don’t fill anything if the caller didn’t call with any extra parameter. That allows keeping the default arguments in the function definition, which is always more “natural”.
A classic example would be a top_k argument in the post processing of classification tasks.
>>> pipe = pipeline("my-new-task")
>>> pipe("This is a test")
[{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}, {"label": "3-star", "score": 0.05}
{"label": "4-star", "score": 0.025}, {"label": "5-star", "score": 0.025}]
>>> pipe("This is a test", top_k=2)
[{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}]
In order to achieve that, we’ll update our postprocess method with a default top_k parameter of 5, and edit _sanitize_parameters to allow this new parameter.
def postprocess(self, model_outputs, top_k=5):
best_class = model_outputs["logits"].softmax(-1)
# add logic here to keep only the top_k best classes
return best_class
def _sanitize_parameters(self, **kwargs):
preprocess_kwargs = {}
if "maybe_arg" in kwargs:
preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"]
postprocess_kwargs = {}
if "top_k" in kwargs:
postprocess_kwargs["top_k"] = kwargs["top_k"]
return preprocess_kwargs, {}, postprocess_kwargs
Try to keep the inputs/outputs very simple and ideally JSON-serializable, as it makes the pipeline usage very easy without requiring users to understand new kinds of objects. It’s also relatively common to support many different types of arguments for ease of use (for example audio files, which can be filenames, URLs or pure bytes); a rough sketch of that follows.
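As a sketch of what accepting several input types might look like inside preprocess (purely illustrative, not part of the Pipeline API):

import requests

def read_audio_bytes(inputs):
    # accept raw bytes, a URL, or a local file path and normalize everything to bytes
    if isinstance(inputs, bytes):
        return inputs
    if isinstance(inputs, str) and inputs.startswith(("http://", "https://")):
        return requests.get(inputs).content
    with open(inputs, "rb") as f:
        return f.read()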
Adding it to the list of supported tasks
To register your new task in the list of supported tasks, you have to add it to the PIPELINE_REGISTRY:
from transformers import AutoModelForSequenceClassification
from transformers.pipelines import PIPELINE_REGISTRY
PIPELINE_REGISTRY.register_pipeline(
"new-task",
pipeline_class=MyPipeline,
pt_model=AutoModelForSequenceClassification,
)
You can specify a default model if you want, in which case it should come with a specific revision (which can be the name of a branch or a commit hash, here we took "abcdef") as well as the type:
PIPELINE_REGISTRY.register_pipeline(
"new-task",
pipeline_class=MyPipeline,
pt_model=AutoModelForSequenceClassification,
default={"pt": ("user/awesome_model", "abcdef")},
type="text",
)
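With a default registered, users can then instantiate the pipeline without specifying a model; a minimal sketch, assuming the registration above has been run:

from transformers import pipeline

# falls back to the registered default, i.e. "user/awesome_model" at revision "abcdef"
pipe = pipeline("new-task")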
Share your pipeline on the Hub
To share your custom pipeline on the Hub, you just have to save the custom code of your Pipeline subclass in a python file. For instance, let’s say we want to use a custom pipeline for sentence pair classification like this:
import numpy as np
from transformers import Pipeline
def softmax(outputs):
maxes = np.max(outputs, axis=-1, keepdims=True)
shifted_exp = np.exp(outputs - maxes)
return shifted_exp / shifted_exp.sum(axis=-1, keepdims=True)
class PairClassificationPipeline(Pipeline):
def _sanitize_parameters(self, **kwargs):
preprocess_kwargs = {}
if "second_text" in kwargs:
preprocess_kwargs["second_text"] = kwargs["second_text"]
return preprocess_kwargs, {}, {}
def preprocess(self, text, second_text=None):
return self.tokenizer(text, text_pair=second_text, return_tensors=self.framework)
def _forward(self, model_inputs):
return self.model(**model_inputs)
def postprocess(self, model_outputs):
logits = model_outputs.logits[0].numpy()
probabilities = softmax(logits)
best_class = np.argmax(probabilities)
label = self.model.config.id2label[best_class]
score = probabilities[best_class].item()
logits = logits.tolist()
return {"label": label, "score": score, "logits": logits}
The implementation is framework agnostic and will work for both PyTorch and TensorFlow models. If we have saved this in a file named pair_classification.py, we can then import it and register it like this:
from pair_classification import PairClassificationPipeline
from transformers.pipelines import PIPELINE_REGISTRY
from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification
PIPELINE_REGISTRY.register_pipeline(
"pair-classification",
pipeline_class=PairClassificationPipeline,
pt_model=AutoModelForSequenceClassification,
tf_model=TFAutoModelForSequenceClassification,
)
Once this is done, we can use it with a pretrained model. For instance sgugger/finetuned-bert-mrpc has been fine-tuned on the MRPC dataset, which classifies pairs of sentences as paraphrases or not.
from transformers import pipeline
classifier = pipeline("pair-classification", model="sgugger/finetuned-bert-mrpc")
Then we can share it on the Hub by using the save_pretrained method in a Repository:
from huggingface_hub import Repository
repo = Repository("test-dynamic-pipeline", clone_from="{your_username}/test-dynamic-pipeline")
classifier.save_pretrained("test-dynamic-pipeline")
repo.push_to_hub()
This will copy the file where you defined PairClassificationPipeline inside the folder "test-dynamic-pipeline", along with saving the model and tokenizer of the pipeline, before pushing everything into the repository {your_username}/test-dynamic-pipeline. After that, anyone can use it as long as they provide the option trust_remote_code=True:
from transformers import pipeline
classifier = pipeline(model="{your_username}/test-dynamic-pipeline", trust_remote_code=True)
Add the pipeline to 🤗 Transformers
If you want to contribute your pipeline to 🤗 Transformers, you will need to add a new module in the pipelines submodule with the code of your pipeline, then add it to the list of tasks defined in pipelines/__init__.py.
Then you will need to add tests. Create a new file tests/test_pipelines_MY_PIPELINE.py, using the other pipeline test files as examples.
The run_pipeline_test function will be very generic and run on small random models on every possible architecture as defined by model_mapping and tf_model_mapping.
This is very important to test future compatibility, meaning that if someone adds a new model for XXXForQuestionAnswering then the pipeline test will attempt to run on it. Because the models are random it’s impossible to check for actual values; that’s why there is a helper ANY that will simply attempt to match the TYPE of the pipeline output.
You also need to implement 2 (ideally 4) tests.
test_small_model_pt : Define 1 small model for this pipeline (doesn’t matter if the results don’t make sense) and test the pipeline outputs. The results should be the same as test_small_model_tf.
test_small_model_tf : Define 1 small model for this pipeline (doesn’t matter if the results don’t make sense) and test the pipeline outputs. The results should be the same as test_small_model_pt.
test_large_model_pt (optional): Tests the pipeline on a real pipeline where the results are supposed to make sense. These tests are slow and should be marked as such. Here the goal is to showcase the pipeline and to make sure there is no drift in future releases.
test_large_model_tf (optional): Tests the pipeline on a real pipeline where the results are supposed to make sense. These tests are slow and should be marked as such. Here the goal is to showcase the pipeline and to make sure there is no drift in future releases. |
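A rough skeleton of what such a small-model test might look like (the class name, model checkpoint and expected output structure are purely illustrative):

import unittest

from transformers import pipeline
from transformers.testing_utils import require_torch

@require_torch
class MyNewTaskPipelineTests(unittest.TestCase):
    def test_small_model_pt(self):
        # hypothetical tiny checkpoint; with random weights only the output structure is meaningful
        pipe = pipeline("my-new-task", model="hf-internal-testing/tiny-random-bert")
        outputs = pipe("This is a test")
        self.assertIsInstance(outputs, list)
        self.assertIn("label", outputs[0])
        self.assertIn("score", outputs[0])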
https://huggingface.co/docs/transformers/model_doc/beit | BEiT
Overview
The BEiT model was proposed in BEiT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei. Inspired by BERT, BEiT is the first paper that makes self-supervised pre-training of Vision Transformers (ViTs) outperform supervised pre-training. Rather than pre-training the model to predict the class of an image (as done in the original ViT paper), BEiT models are pre-trained to predict visual tokens from the codebook of OpenAI’s DALL-E model given masked patches.
The abstract from the paper is the following:
We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation from Image Transformers. Following BERT developed in the natural language processing area, we propose a masked image modeling task to pretrain vision Transformers. Specifically, each image has two views in our pre-training, i.e, image patches (such as 16x16 pixels), and visual tokens (i.e., discrete tokens). We first “tokenize” the original image into visual tokens. Then we randomly mask some image patches and fed them into the backbone Transformer. The pre-training objective is to recover the original visual tokens based on the corrupted image patches. After pre-training BEiT, we directly fine-tune the model parameters on downstream tasks by appending task layers upon the pretrained encoder. Experimental results on image classification and semantic segmentation show that our model achieves competitive results with previous pre-training methods. For example, base-size BEiT achieves 83.2% top-1 accuracy on ImageNet-1K, significantly outperforming from-scratch DeiT training (81.8%) with the same setup. Moreover, large-size BEiT obtains 86.3% only using ImageNet-1K, even outperforming ViT-L with supervised pre-training on ImageNet-22K (85.2%).
Tips:
BEiT models are regular Vision Transformers, but pre-trained in a self-supervised way rather than supervised. They outperform both the original model (ViT) and Data-efficient Image Transformers (DeiT) when fine-tuned on ImageNet-1K and CIFAR-100. You can check out demo notebooks regarding inference as well as fine-tuning on custom data here (you can just replace ViTFeatureExtractor by BeitImageProcessor and ViTForImageClassification by BeitForImageClassification).
There’s also a demo notebook available which showcases how to combine DALL-E’s image tokenizer with BEiT for performing masked image modeling. You can find it here.
As the BEiT models expect each image to be of the same size (resolution), one can use BeitImageProcessor to resize (or rescale) and normalize images for the model.
Both the patch resolution and image resolution used during pre-training or fine-tuning are reflected in the name of each checkpoint. For example, microsoft/beit-base-patch16-224 refers to a base-sized architecture with patch resolution of 16x16 and fine-tuning resolution of 224x224. All checkpoints can be found on the hub.
The available checkpoints are either (1) pre-trained on ImageNet-22k (a collection of 14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k or (3) also fine-tuned on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes).
BEiT uses relative position embeddings, inspired by the T5 model. During pre-training, the authors shared the relative position bias among the several self-attention layers. During fine-tuning, each layer’s relative position bias is initialized with the shared relative position bias obtained after pre-training. Note that, if one wants to pre-train a model from scratch, one needs to set either the use_relative_position_bias or the use_absolute_position_embeddings attribute of BeitConfig to True in order to add position embeddings.
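For example, a minimal sketch of a from-scratch pre-training configuration might look like this (the exact attribute choices depend on the setup you want to reproduce):

from transformers import BeitConfig, BeitForMaskedImageModeling

# add relative position biases and a mask token for masked image modeling from scratch
config = BeitConfig(use_relative_position_bias=True, use_mask_token=True)
model = BeitForMaskedImageModeling(config)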
BEiT pre-training. Taken from the original paper.
This model was contributed by nielsr. The JAX/FLAX version of this model was contributed by kamalkraj. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BEiT.
Image Classification
BeitForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
Semantic segmentation
Semantic segmentation task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
BEiT specific outputs
class transformers.models.beit.modeling_beit.BeitModelOutputWithPooling
< source >
( last_hidden_state: FloatTensor = None pooler_output: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )
Parameters
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Average of the last layer hidden states of the patch tokens (excluding the [CLS] token) if config.use_mean_pooling is set to True. If set to False, then the final hidden state of the [CLS] token will be returned.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Class for outputs of BeitModel.
class transformers.models.beit.modeling_flax_beit.FlaxBeitModelOutputWithPooling
< source >
( last_hidden_state: Array = None pooler_output: Array = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None )
Parameters
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Average of the last layer hidden states of the patch tokens (excluding the [CLS] token) if config.use_mean_pooling is set to True. If set to False, then the final hidden state of the [CLS] token will be returned.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Class for outputs of FlaxBeitModel.
BeitConfig
class transformers.BeitConfig
< source >
( vocab_size = 8192 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.0 attention_probs_dropout_prob = 0.0 initializer_range = 0.02 layer_norm_eps = 1e-12 image_size = 224 patch_size = 16 num_channels = 3 use_mask_token = False use_absolute_position_embeddings = False use_relative_position_bias = False use_shared_relative_position_bias = False layer_scale_init_value = 0.1 drop_path_rate = 0.1 use_mean_pooling = True out_indices = [3, 5, 7, 11] pool_scales = [1, 2, 3, 6] use_auxiliary_head = True auxiliary_loss_weight = 0.4 auxiliary_channels = 256 auxiliary_num_convs = 1 auxiliary_concat_input = False semantic_loss_ignore_index = 255 **kwargs )
Parameters
vocab_size (int, optional, defaults to 8192) — Vocabulary size of the BEiT model. Defines the number of different image tokens that can be used during pre-training.
hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers.
image_size (int, optional, defaults to 224) — The size (resolution) of each image.
patch_size (int, optional, defaults to 16) — The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) — The number of input channels.
use_mask_token (bool, optional, defaults to False) — Whether to use a mask token for masked image modeling.
use_absolute_position_embeddings (bool, optional, defaults to False) — Whether to use BERT-style absolute position embeddings.
use_relative_position_bias (bool, optional, defaults to False) — Whether to use T5-style relative position embeddings in the self-attention layers.
use_shared_relative_position_bias (bool, optional, defaults to False) — Whether to use the same relative position embeddings across all self-attention layers of the Transformer.
layer_scale_init_value (float, optional, defaults to 0.1) — Scale to use in the self-attention layers. 0.1 for base, 1e-5 for large. Set 0 to disable layer scale.
drop_path_rate (float, optional, defaults to 0.1) — Stochastic depth rate per sample (when applied in the main path of residual layers).
use_mean_pooling (bool, optional, defaults to True) — Whether to mean pool the final hidden states of the patches instead of using the final hidden state of the CLS token, before applying the classification head.
out_indices (List[int], optional, defaults to [3, 5, 7, 11]) — Indices of the feature maps to use for semantic segmentation.
pool_scales (Tuple[int], optional, defaults to [1, 2, 3, 6]) — Pooling scales used in Pooling Pyramid Module applied on the last feature map.
use_auxiliary_head (bool, optional, defaults to True) — Whether to use an auxiliary head during training.
auxiliary_loss_weight (float, optional, defaults to 0.4) — Weight of the cross-entropy loss of the auxiliary head.
auxiliary_channels (int, optional, defaults to 256) — Number of channels to use in the auxiliary head.
auxiliary_num_convs (int, optional, defaults to 1) — Number of convolutional layers to use in the auxiliary head.
auxiliary_concat_input (bool, optional, defaults to False) — Whether to concatenate the output of the auxiliary head with the input before the classification layer.
semantic_loss_ignore_index (int, optional, defaults to 255) — The index that is ignored by the loss function of the semantic segmentation model.
This is the configuration class to store the configuration of a BeitModel. It is used to instantiate a BEiT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BEiT microsoft/beit-base-patch16-224-pt22k architecture.
Example:
>>> from transformers import BeitConfig, BeitModel

>>> # Initializing a BEiT beit-base-patch16-224-pt22k style configuration
>>> configuration = BeitConfig()

>>> # Initializing a model (with random weights) from the beit-base-patch16-224-pt22k style configuration
>>> model = BeitModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
BeitFeatureExtractor
class transformers.BeitFeatureExtractor
__call__
( images segmentation_maps = None **kwargs )
post_process_semantic_segmentation
( outputs target_sizes: typing.List[typing.Tuple] = None ) → semantic_segmentation
Parameters
outputs (BeitForSemanticSegmentation) — Raw outputs of the model.
target_sizes (List[Tuple] of length batch_size, optional) — List of tuples corresponding to the requested final size (height, width) of each prediction. If unset, predictions will not be resized.
List[torch.Tensor] of length batch_size, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each torch.Tensor corresponds to a semantic class id.
Converts the output of BeitForSemanticSegmentation into semantic segmentation maps. Only supports PyTorch.
BeitImageProcessor
class transformers.BeitImageProcessor
< source >
( do_resize: bool = True size: typing.Dict[str, int] = None resample: Resampling = <Resampling.BICUBIC: 3> do_center_crop: bool = True crop_size: typing.Dict[str, int] = None rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_rescale: bool = True do_normalize: bool = True image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_reduce_labels: bool = False **kwargs )
Parameters
do_resize (bool, optional, defaults to True) — Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the do_resize parameter in the preprocess method.
size (Dict[str, int], optional, defaults to {"height": 256, "width": 256}) — Size of the output image after resizing. Can be overridden by the size parameter in the preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) — Resampling filter to use if resizing the image. Can be overridden by the resample parameter in the preprocess method.
do_center_crop (bool, optional, defaults to True) — Whether to center crop the image. If the input size is smaller than crop_size along any edge, the image is padded with 0’s and then center cropped. Can be overridden by the do_center_crop parameter in the preprocess method.
crop_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) — Desired output size when applying center-cropping. Only has an effect if do_center_crop is set to True. Can be overridden by the crop_size parameter in the preprocess method.
do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the preprocess method.
do_normalize (bool, optional, defaults to True) — Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) — The mean to use if normalizing the image. This is a float or list of floats of length of the number of channels of the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) — The standard deviation to use if normalizing the image. This is a float or list of floats of length of the number of channels of the image. Can be overridden by the image_std parameter in the preprocess method.
do_reduce_labels (bool, optional, defaults to False) — Whether or not to reduce all label values of segmentation maps by 1. Usually used for datasets where 0 is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k). The background label will be replaced by 255. Can be overridden by the do_reduce_labels parameter in the preprocess method.
Constructs a BEiT image processor.
preprocess
< source >
( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] segmentation_maps: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')], NoneType] = None do_resize: bool = None size: typing.Dict[str, int] = None resample: Resampling = None do_center_crop: bool = None crop_size: typing.Dict[str, int] = None do_rescale: bool = None rescale_factor: float = None do_normalize: bool = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_reduce_labels: typing.Optional[bool] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None **kwargs )
Parameters
images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False.
do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) — Size of the image after resizing.
resample (int, optional, defaults to self.resample) — Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only has an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) — Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) — Size of the image after center crop. If one edge of the image is smaller than crop_size, it will be padded with zeros and then cropped.
do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image values to the range [0, 1].
rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) — Image mean.
image_std (float or List[float], optional, defaults to self.image_std) — Image standard deviation.
do_reduce_labels (bool, optional, defaults to self.do_reduce_labels) — Whether or not to reduce all label values of segmentation maps by 1. Usually used for datasets where 0 is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k). The background label will be replaced by 255.
return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: Use the channel dimension format of the input image.
input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
"none" or ChannelDimension.NONE: image in (height, width) format.
Preprocess an image or batch of images.
post_process_semantic_segmentation
< source >
( outputs target_sizes: typing.List[typing.Tuple] = None ) → semantic_segmentation
Parameters
outputs (BeitForSemanticSegmentation) — Raw outputs of the model.
target_sizes (List[Tuple] of length batch_size, optional) — List of tuples corresponding to the requested final size (height, width) of each prediction. If unset, predictions will not be resized.
Returns
semantic_segmentation
List[torch.Tensor] of length batch_size, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each torch.Tensor corresponds to a semantic class id.
Converts the output of BeitForSemanticSegmentation into semantic segmentation maps. Only supports PyTorch.
BeitModel
class transformers.BeitModel
< source >
( config: BeitConfig add_pooling_layer: bool = True )
Parameters
config (BeitConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Beit Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >
( pixel_values: typing.Optional[torch.Tensor] = None bool_masked_pos: typing.Optional[torch.BoolTensor] = None head_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.beit.modeling_beit.BeitModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See BeitImageProcessor.call() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches), optional) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
A transformers.models.beit.modeling_beit.BeitModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BeitConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Average of the last layer hidden states of the patch tokens (excluding the [CLS] token) if config.use_mean_pooling is set to True. If set to False, then the final hidden state of the [CLS] token will be returned.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The BeitModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoImageProcessor, BeitModel
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224-pt22k")
>>> model = BeitModel.from_pretrained("microsoft/beit-base-patch16-224-pt22k")
>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 197, 768]
BeitForMaskedImageModeling
class transformers.BeitForMaskedImageModeling
< source >
( config: BeitConfig )
Parameters
config (BeitConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Beit Model transformer with a ‘language’ modeling head on top. BEiT does masked image modeling by predicting visual tokens of a Vector-Quantized Variational Autoencoder (VQ-VAE), whereas other vision models like ViT and DeiT predict RGB pixel values. As a result, this class is incompatible with AutoModelForMaskedImageModeling, so you will need to use BeitForMaskedImageModeling directly if you wish to do masked image modeling with BEiT. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >
( pixel_values: typing.Optional[torch.Tensor] = None bool_masked_pos: typing.Optional[torch.BoolTensor] = None head_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See BeitImageProcessor.call() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches)) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BeitConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The BeitForMaskedImageModeling forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
>>> from transformers import AutoImageProcessor, BeitForMaskedImageModeling
>>> import torch
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224-pt22k")
>>> model = BeitForMaskedImageModeling.from_pretrained("microsoft/beit-base-patch16-224-pt22k")
>>> num_patches = (model.config.image_size // model.config.patch_size) ** 2
>>> pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
>>> # create random boolean mask of shape (batch_size, num_patches)
>>> bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()
>>> outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
>>> loss, logits = outputs.loss, outputs.logits
>>> list(logits.shape)
[1, 196, 8192]
BeitForImageClassification
class transformers.BeitForImageClassification
< source >
( config: BeitConfig )
Parameters
config (BeitConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Beit Model transformer with an image classification head on top (a linear layer on top of the average of the final hidden states of the patch tokens) e.g. for ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >
( pixel_values: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See BeitImageProcessor.call() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
A transformers.modeling_outputs.ImageClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BeitConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the model at the output of each stage.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The BeitForImageClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoImageProcessor, BeitForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224")
>>> model = BeitForImageClassification.from_pretrained("microsoft/beit-base-patch16-224")
>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
tabby, tabby cat
BeitForSemanticSegmentation
class transformers.BeitForSemanticSegmentation
< source >
( config: BeitConfig )
Parameters
config (BeitConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Beit Model transformer with a semantic segmentation head on top e.g. for ADE20k, CityScapes.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >
( pixel_values: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See BeitImageProcessor.call() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, height, width), optional) — Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1, a classification loss is computed (Cross-Entropy).
A transformers.modeling_outputs.SemanticSegmenterOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BeitConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel.
The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is to avoid doing two interpolations and losing some quality when a user needs to resize the logits to the original image size as post-processing. You should always check your logits shape and resize as needed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, patch_size, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The BeitForSemanticSegmentation forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
>>> from transformers import AutoImageProcessor, BeitForSemanticSegmentation
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-finetuned-ade-640-640")
>>> model = BeitForSemanticSegmentation.from_pretrained("microsoft/beit-base-finetuned-ade-640-640")
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> outputs = model(**inputs)
>>> # logits are of shape (batch_size, num_labels, height, width)
>>> logits = outputs.logits
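To turn those logits into per-pixel class maps at the original image resolution, one could then call the image processor's post-processing helper (a sketch continuing the example above; the printed shape assumes the 640x480 COCO image used here):

>>> predicted_maps = image_processor.post_process_semantic_segmentation(
...     outputs, target_sizes=[image.size[::-1]]
... )
>>> predicted_maps[0].shape
torch.Size([480, 640])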
FlaxBeitModel
class transformers.FlaxBeitModel
< source >
( config: BeitConfig input_shape = None seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs )
Parameters
config (BeitConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
The bare Beit Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( pixel_values bool_masked_pos = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.beit.modeling_flax_beit.FlaxBeitModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.models.beit.modeling_flax_beit.FlaxBeitModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.beit.configuration_beit.BeitConfig'>) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Average of the last layer hidden states of the patch tokens (excluding the [CLS] token) if config.use_mean_pooling is set to True. If set to False, then the final hidden state of the [CLS] token will be returned.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxBeitPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, you should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> from transformers import AutoImageProcessor, FlaxBeitModel
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224-pt22k-ft22k")
>>> model = FlaxBeitModel.from_pretrained("microsoft/beit-base-patch16-224-pt22k-ft22k")
>>> inputs = image_processor(images=image, return_tensors="np")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
FlaxBeitForMaskedImageModeling
class transformers.FlaxBeitForMaskedImageModeling
< source >
( config: BeitConfig input_shape = None seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs )
Parameters
config (BeitConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Beit Model transformer with a ‘language’ modeling head on top (to predict visual tokens).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( pixel_values bool_masked_pos = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.beit.configuration_beit.BeitConfig'>) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxBeitPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, you should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
bool_masked_pos (numpy.ndarray of shape (batch_size, num_patches)): Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Examples:
>>> from transformers import AutoImageProcessor, FlaxBeitForMaskedImageModeling
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224-pt22k")
>>> model = FlaxBeitForMaskedImageModeling.from_pretrained("microsoft/beit-base-patch16-224-pt22k")
>>> inputs = image_processor(images=image, return_tensors="np")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
FlaxBeitForImageClassification
class transformers.FlaxBeitForImageClassification
< source >
( config: BeitConfig input_shape = None seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs )
Parameters
config (BeitConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Beit Model transformer with an image classification head on top (a linear layer on top of the average of the final hidden states of the patch tokens) e.g. for ImageNet.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( pixel_values bool_masked_pos = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.beit.configuration_beit.BeitConfig'>) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxBeitPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, you should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoImageProcessor, FlaxBeitForImageClassification
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224")
>>> model = FlaxBeitForImageClassification.from_pretrained("microsoft/beit-base-patch16-224")
>>> inputs = image_processor(images=image, return_tensors="np")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_class_idx = logits.argmax(-1).item()
>>> print("Predicted class:", model.config.id2label[predicted_class_idx]) |
Instantiating a big model
When you want to use a very big pretrained model, one challenge is to minimize RAM usage. The usual workflow in PyTorch is:
Create your model with random weights.
Load your pretrained weights.
Put those pretrained weights in your random model.
Steps 1 and 2 both require a full version of the model in memory, which is not a problem in most cases, but if your model starts weighing several gigabytes, these two copies can exhaust your RAM. Even worse, if you are using torch.distributed to launch a distributed training, each process will load the pretrained model and store these two copies in RAM.
Note that the randomly created model is initialized with “empty” tensors, which take up space in memory without filling it (so the “random” values are whatever happened to be in that chunk of memory at the time). The random initialization following the appropriate distribution for the kind of model/parameters instantiated (a normal distribution, for instance) is only performed after step 3, on the weights that were not loaded, to be as fast as possible!
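For reference, here is a minimal sketch of that three-step workflow in plain PyTorch (the local checkpoint filename is purely illustrative):
import torch
from transformers import AutoConfig, AutoModel

# 1. Create the model with random weights
config = AutoConfig.from_pretrained("bert-base-cased")
model = AutoModel.from_config(config)

# 2. Load the pretrained weights (a second full copy of the model in RAM)
state_dict = torch.load("pytorch_model.bin")

# 3. Put the pretrained weights in the randomly initialized model
model.load_state_dict(state_dict)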
In this guide, we explore the solutions Transformers offer to deal with this issue. Note that this is an area of active development, so the APIs explained here may change slightly in the future.
Sharded checkpoints
Since version 4.18.0, model checkpoints that end up taking more than 10GB of space are automatically sharded into smaller pieces. Instead of a single checkpoint when you call model.save_pretrained(save_dir), you will end up with several partial checkpoints (each of size < 10GB) and an index that maps parameter names to the files they are stored in.
You can control the maximum size before sharding with the max_shard_size parameter, so for the sake of an example, we’ll use a normal-size model with a small shard size: let’s take a traditional BERT model.
from transformers import AutoModel
model = AutoModel.from_pretrained("bert-base-cased")
If you save it using save_pretrained(), you will get a new folder with two files: the config of the model and its weights:
>>> import os
>>> import tempfile
>>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir)
... print(sorted(os.listdir(tmp_dir)))
['config.json', 'pytorch_model.bin']
Now let’s use a maximum shard size of 200MB:
>>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir, max_shard_size="200MB")
... print(sorted(os.listdir(tmp_dir)))
['config.json', 'pytorch_model-00001-of-00003.bin', 'pytorch_model-00002-of-00003.bin', 'pytorch_model-00003-of-00003.bin', 'pytorch_model.bin.index.json']
On top of the configuration of the model, we see three different weights files, and an index.json file which is our index. A checkpoint like this can be fully reloaded using the from_pretrained() method:
>>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir, max_shard_size="200MB")
... new_model = AutoModel.from_pretrained(tmp_dir)
The main advantage of doing this for big models is that during step 2 of the workflow shown above, each shard of the checkpoint is loaded after the previous one, capping the memory usage in RAM to the model size plus the size of the biggest shard.
Behind the scenes, the index file is used to determine which keys are in the checkpoint, and where the corresponding weights are stored. We can load that index like any json and get a dictionary:
>>> import json
>>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir, max_shard_size="200MB")
... with open(os.path.join(tmp_dir, "pytorch_model.bin.index.json"), "r") as f:
... index = json.load(f)
>>> print(index.keys())
dict_keys(['metadata', 'weight_map'])
The metadata just consists of the total size of the model for now. We plan to add other information in the future:
>>> index["metadata"]
{'total_size': 433245184}
The weights map is the main part of this index, which maps each parameter name (as usually found in a PyTorch model state_dict) to the file it’s stored in:
>>> index["weight_map"]
{'embeddings.LayerNorm.bias': 'pytorch_model-00001-of-00003.bin',
'embeddings.LayerNorm.weight': 'pytorch_model-00001-of-00003.bin',
...
If you want to directly load such a sharded checkpoint inside a model without using from_pretrained() (like you would do model.load_state_dict() for a full checkpoint) you should use load_sharded_checkpoint():
>>> from transformers.modeling_utils import load_sharded_checkpoint
>>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir, max_shard_size="200MB")
... load_sharded_checkpoint(model, tmp_dir)
Low memory loading
Sharded checkpoints reduce the memory usage during step 2 of the workflow mentioned above, but in order to use that model in a low memory setting, we recommend leveraging our tools based on the Accelerate library.
Please read the following guide for more information: Large model loading using Accelerate |
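As a rough sketch of what low-memory loading looks like in practice (the checkpoint name here is only an example, and both options rely on the Accelerate library being installed):
from transformers import AutoModelForCausalLM

# Builds the model with empty (meta) weights and fills them shard by shard,
# so at most one full copy of the weights is in RAM at any time
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-3b", low_cpu_mem_usage=True)

# device_map="auto" additionally dispatches the weights across the available
# GPUs and CPU as they are loaded
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-3b", device_map="auto")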
BERT
Overview
The BERT model was proposed in BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. It’s a bidirectional transformer pretrained using a combination of the masked language modeling objective and next sentence prediction on a large corpus comprising the Toronto Book Corpus and Wikipedia.
The abstract from the paper is the following:
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.
BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
Tips:
BERT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.
BERT was trained with the masked language modeling (MLM) and next sentence prediction (NSP) objectives. It is efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation.
BERT corrupts the inputs by using random masking. More precisely, during pretraining, a given percentage of tokens (usually 15%) is masked by:
a special mask token with probability 0.8
a random token different from the one masked with probability 0.1
the same token with probability 0.1
The model must predict the original tokens, but it also has a second objective: the inputs are two sentences A and B (with a separation token in between). With probability 50%, the sentences are consecutive in the corpus; in the remaining 50% of cases, they are not related. The model has to predict whether the sentences are consecutive or not.
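The masking procedure described above is what DataCollatorForLanguageModeling applies when preparing MLM batches; here is a minimal sketch (the example sentence is arbitrary):
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# mlm_probability is the 15% masking rate; the 80/10/10 split between [MASK],
# random token and unchanged token is handled internally by the collator
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

batch = data_collator([tokenizer("The capital of France is Paris.")])
print(batch["input_ids"])   # some tokens replaced by [MASK] or random tokens
print(batch["labels"])      # -100 everywhere except at the masked positions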
This model was contributed by thomwolf. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BERT. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Text Classification
A blog post on BERT Text Classification in a different language.
A notebook for Finetuning BERT (and friends) for multi-label text classification.
A notebook on how to Finetune BERT for multi-label classification using PyTorch. 🌎
A notebook on how to warm-start an EncoderDecoder model with BERT for summarization.
BertForSequenceClassification is supported by this example script and notebook.
TFBertForSequenceClassification is supported by this example script and notebook.
FlaxBertForSequenceClassification is supported by this example script and notebook.
Text classification task guide
Token Classification
A blog post on how to use Hugging Face Transformers with Keras: Fine-tune a non-English BERT for Named Entity Recognition.
A notebook for Finetuning BERT for named-entity recognition using only the first wordpiece of each word in the word label during tokenization. To propagate the label of the word to all wordpieces, see this version of the notebook instead.
BertForTokenClassification is supported by this example script and notebook.
TFBertForTokenClassification is supported by this example script and notebook.
FlaxBertForTokenClassification is supported by this example script.
Token classification chapter of the 🤗 Hugging Face Course.
Token classification task guide
Fill-Mask
BertForMaskedLM is supported by this example script and notebook.
TFBertForMaskedLM is supported by this example script and notebook.
FlaxBertForMaskedLM is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling task guide
Question Answering
BertForQuestionAnswering is supported by this example script and notebook.
TFBertForQuestionAnswering is supported by this example script and notebook.
FlaxBertForQuestionAnswering is supported by this example script.
Question answering chapter of the 🤗 Hugging Face Course.
Question answering task guide
Multiple choice
BertForMultipleChoice is supported by this example script and notebook.
TFBertForMultipleChoice is supported by this example script and notebook.
Multiple choice task guide
⚡️ Inference
A blog post on how to Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia.
A blog post on how to Accelerate BERT inference with DeepSpeed-Inference on GPUs.
⚙️ Pretraining
A blog post on Pre-Training BERT with Hugging Face Transformers and Habana Gaudi.
🚀 Deploy
A blog post on how to Convert Transformers to ONNX with Hugging Face Optimum.
A blog post on how to Setup Deep Learning environment for Hugging Face Transformers with Habana Gaudi on AWS.
A blog post on Autoscaling BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module.
A blog post on Serverless BERT with HuggingFace, AWS Lambda, and Docker.
A blog post on Hugging Face Transformers BERT fine-tuning using Amazon SageMaker and Training Compiler.
A blog post on Task-specific knowledge distillation for BERT using Transformers & Amazon SageMaker.
BertConfig
class transformers.BertConfig
< source >
( vocab_size = 30522 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 type_vocab_size = 2 initializer_range = 0.02 layer_norm_eps = 1e-12 pad_token_id = 0 position_embedding_type = 'absolute' use_cache = True classifier_dropout = None **kwargs )
Parameters
vocab_size (int, optional, defaults to 30522) — Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling BertModel or TFBertModel.
hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) — The vocabulary size of the token_type_ids passed when calling BertModel or TFBertModel.
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.).
is_decoder (bool, optional, defaults to False) — Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True.
classifier_dropout (float, optional) — The dropout ratio for the classification head.
This is the configuration class to store the configuration of a BertModel or a TFBertModel. It is used to instantiate a BERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BERT bert-base-uncased architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Examples:
>>> from transformers import BertConfig, BertModel
>>> # Initializing a BERT bert-base-uncased style configuration
>>> configuration = BertConfig()
>>> # Initializing a model (with random weights) from the bert-base-uncased style configuration
>>> model = BertModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
BertTokenizer
class transformers.BertTokenizer
< source >
( vocab_file do_lower_case = True do_basic_tokenize = True never_split = None unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' tokenize_chinese_chars = True strip_accents = None **kwargs )
Parameters
vocab_file (str) — File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) — Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (bool, optional, defaults to True) — Whether or not to do basic tokenization before WordPiece.
never_split (Iterable, optional) — Collection of tokens which will never be split during tokenization. Only has an effect when do_basic_tokenize=True
unk_token (str, optional, defaults to "[UNK]") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") — The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (bool, optional, defaults to True) — Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this issue).
strip_accents (bool, optional) — Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for lowercase (as in the original BERT).
Construct a BERT tokenizer. Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
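For example, a quick sketch of basic usage (the input sentence is arbitrary):
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# WordPiece splits out-of-vocabulary words into subword pieces prefixed with "##"
tokens = tokenizer.tokenize("Transformers are amazing")

# Calling the tokenizer adds [CLS]/[SEP] and maps tokens to vocabulary IDs
encoding = tokenizer("Transformers are amazing")
print(tokens)
print(encoding["input_ids"])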
build_inputs_with_special_tokens
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A BERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
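A small sketch of building a sequence pair by hand with this method (the sentences are arbitrary):
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("How old are you?"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("I am six."))

# Produces [CLS] A [SEP] B [SEP]
pair_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
print(tokenizer.decode(pair_ids))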
get_special_tokens_mask
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model.
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence
pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
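In practice you rarely call this method directly; encoding a sentence pair produces the same mask, as in this small sketch:
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

encoding = tokenizer("How old are you?", "I am six.")
# 0s cover [CLS] A [SEP], 1s cover B [SEP]
print(encoding["token_type_ids"])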
save_vocabulary
< source >
( save_directory: str filename_prefix: typing.Optional[str] = None )
BertTokenizerFast
class transformers.BertTokenizerFast
< source >
( vocab_file = None tokenizer_file = None do_lower_case = True unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' tokenize_chinese_chars = True strip_accents = None **kwargs )
Parameters
vocab_file (str) — File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) — Whether or not to lowercase the input when tokenizing.
unk_token (str, optional, defaults to "[UNK]") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") — The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
clean_text (bool, optional, defaults to True) — Whether or not to clean the text before tokenization by removing any control characters and replacing all whitespaces by the classic one.
tokenize_chinese_chars (bool, optional, defaults to True) — Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this issue).
strip_accents (bool, optional) — Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for lowercase (as in the original BERT).
wordpieces_prefix (str, optional, defaults to "##") — The prefix for subwords.
Construct a “fast” BERT tokenizer (backed by HuggingFace’s tokenizers library). Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
< source >
( token_ids_0 token_ids_1 = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A BERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
create_token_type_ids_from_sequences
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence
pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
TFBertTokenizer
class transformers.TFBertTokenizer
< source >
( *args **kwargs )
Parameters
vocab_list (list) — List containing the vocabulary.
do_lower_case (bool, optional, defaults to True) — Whether or not to lowercase the input when tokenizing.
cls_token_id (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
sep_token_id (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
pad_token_id (str, optional, defaults to "[PAD]") — The token used for padding, for example when batching sequences of different lengths.
padding (str, defaults to "longest") — The type of padding to use. Can be either "longest", to pad only up to the longest sample in the batch, or "max_length", to pad all inputs to the maximum length supported by the tokenizer.
truncation (bool, optional, defaults to True) — Whether to truncate the sequence to the maximum length.
max_length (int, optional, defaults to 512) — The maximum length of the sequence, used for padding (if padding is “max_length”) and/or truncation (if truncation is True).
pad_to_multiple_of (int, optional, defaults to None) — If set, the sequence will be padded to a multiple of this value.
return_token_type_ids (bool, optional, defaults to True) — Whether to return token_type_ids.
return_attention_mask (bool, optional, defaults to True) — Whether to return the attention_mask.
use_fast_bert_tokenizer (bool, optional, defaults to True) — If True, will use the FastBertTokenizer class from Tensorflow Text. If False, will use the BertTokenizer class instead. BertTokenizer supports some additional options, but is slower and cannot be exported to TFLite.
This is an in-graph tokenizer for BERT. It should be initialized similarly to other tokenizers, using the from_pretrained() method. It can also be initialized with the from_tokenizer() method, which imports settings from an existing standard tokenizer object.
In-graph tokenizers, unlike other Hugging Face tokenizers, are actually Keras layers and are designed to be run when the model is called, rather than during preprocessing. As a result, they have somewhat more limited options than standard tokenizer classes. They are most useful when you want to create an end-to-end model that goes straight from tf.string inputs to outputs.
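As a rough sketch of such an end-to-end model (the classification head of this checkpoint is untrained, so this only illustrates the wiring):
import tensorflow as tf
from transformers import TFBertTokenizer, TFBertForSequenceClassification

tokenizer = TFBertTokenizer.from_pretrained("bert-base-uncased")
bert = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")

# The tokenizer is a Keras layer, so raw strings can go straight into the graph
text_input = tf.keras.Input(shape=(), dtype=tf.string)
tokenized = tokenizer(text_input)
logits = bert(tokenized).logits
end_to_end_model = tf.keras.Model(inputs=text_input, outputs=logits)

print(end_to_end_model(tf.constant(["This movie was great!"])))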
from_pretrained
< source >
( pretrained_model_name_or_path: typing.Union[str, os.PathLike] *init_inputs **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — The name or path to the pre-trained tokenizer.
Instantiate a TFBertTokenizer from a pre-trained tokenizer.
Examples:
from transformers import TFBertTokenizer
tf_tokenizer = TFBertTokenizer.from_pretrained("bert-base-uncased")
from_tokenizer
< source >
( tokenizer: PreTrainedTokenizerBase **kwargs )
Parameters
tokenizer (PreTrainedTokenizerBase) — The tokenizer to use to initialize the TFBertTokenizer.
Initialize a TFBertTokenizer from an existing Tokenizer.
Examples:
from transformers import AutoTokenizer, TFBertTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tf_tokenizer = TFBertTokenizer.from_tokenizer(tokenizer)
Bert specific outputs
class transformers.models.bert.modeling_bert.BertForPreTrainingOutput
< source >
( loss: typing.Optional[torch.FloatTensor] = None prediction_logits: FloatTensor = None seq_relationship_logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )
Parameters
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of BertForPreTraining.
class transformers.models.bert.modeling_tf_bert.TFBertForPreTrainingOutput
< source >
( loss: tf.Tensor | None = None prediction_logits: tf.Tensor = None seq_relationship_logits: tf.Tensor = None hidden_states: Optional[Union[Tuple[tf.Tensor], tf.Tensor]] = None attentions: Optional[Union[Tuple[tf.Tensor], tf.Tensor]] = None )
Parameters
prediction_logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (tf.Tensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of TFBertForPreTraining.
class transformers.models.bert.modeling_flax_bert.FlaxBertForPreTrainingOutput
< source >
( prediction_logits: Array = None seq_relationship_logits: Array = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None )
Parameters
prediction_logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (jnp.ndarray of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of FlaxBertForPreTraining.
replace — Returns a new object replacing the specified fields with new values.
BertModel
class transformers.BertModel
< source >
( config add_pooling_layer = True )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Bert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention is all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder, the model needs to be initialized with the is_decoder argument of the configuration set to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass.
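For example, a minimal sketch of configuring BERT as a decoder with cross-attention:
from transformers import BertConfig, BertModel

# is_decoder adds the causal mask; add_cross_attention adds cross-attention layers
config = BertConfig.from_pretrained("bert-base-uncased", is_decoder=True, add_cross_attention=True)
# The cross-attention weights are newly added and therefore randomly initialized
decoder = BertModel.from_pretrained("bert-base-uncased", config=config)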
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
The BertModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, you should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, BertModel
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = BertModel.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
BertForPreTraining
class transformers.BertForPreTraining
< source >
( config )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with two heads on top as done during the pretraining: a masked language modeling head and a next sentence prediction (classification) head.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None next_sentence_label: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.bert.modeling_bert.BertForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
next_sentence_label (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair (see input_ids docstring). Indices should be in [0, 1]:
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
kwargs (Dict[str, any], optional, defaults to {}) — Used to hide legacy arguments that have been deprecated.
A transformers.models.bert.modeling_bert.BertForPreTrainingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The BertForPreTraining forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, BertForPreTraining
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = BertForPreTraining.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> prediction_logits = outputs.prediction_logits
>>> seq_relationship_logits = outputs.seq_relationship_logits
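The labels and next_sentence_label arguments described above can be supplied together so that the returned loss is the sum of the MLM and NSP losses. A minimal sketch, continuing from the example above (the unmasked input ids are reused as MLM labels purely for illustration; real pretraining would set non-masked positions to -100):
>>> pair = tokenizer("Hello, my dog is cute", "He likes to sleep all day", return_tensors="pt")
>>> outputs = model(
...     **pair,
...     labels=pair["input_ids"],  # MLM labels (illustrative only)
...     next_sentence_label=torch.LongTensor([0]),  # 0: sentence B follows sentence A
... )
>>> total_loss = outputs.loss  # masked LM loss + next sentence prediction loss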
BertLMHeadModel
class transformers.BertLMHeadModel
< source >
( config )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a language modeling head on top for CLM fine-tuning.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.Tensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
The BertLMHeadModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> import torch
>>> from transformers import AutoTokenizer, BertLMHeadModel
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = BertLMHeadModel.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> loss = outputs.loss
>>> logits = outputs.logits
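The past_key_values and use_cache arguments above allow feeding only the newly generated token at each step instead of the full sequence. A minimal caching sketch, continuing from the example above and assuming the model is reloaded with is_decoder=True so that the cache is actually returned:
>>> model = BertLMHeadModel.from_pretrained("bert-base-uncased", is_decoder=True)
>>> inputs = tokenizer("Hello, my dog is", return_tensors="pt")
>>> outputs = model(**inputs, use_cache=True)
>>> past_key_values = outputs.past_key_values  # cached key/value states for every layer
>>> next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
>>> # on the next step, pass only the new token together with the cache
>>> outputs = model(input_ids=next_token, past_key_values=past_key_values, use_cache=True)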
BertForMaskedLM
class transformers.BertForMaskedLM
< source >
( config )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The BertForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, BertForMaskedLM
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = BertForMaskedLM.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # retrieve the index of the [MASK] token
>>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
>>> tokenizer.decode(predicted_token_id)
'paris'
>>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
>>> # mask the labels of all non-[MASK] tokens so the loss is only computed on the masked position
>>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
>>> outputs = model(**inputs, labels=labels)
>>> round(outputs.loss.item(), 2)
0.88
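Because the logits contain a score for every vocabulary token at the masked position, you can also inspect several candidate fills instead of only the argmax. A small illustrative sketch, continuing from the example above (not part of the official example):
>>> top5 = torch.topk(logits[0, mask_token_index], k=5, dim=-1).indices[0]
>>> candidates = tokenizer.decode(top5)  # five candidate fills, highest-scoring first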
BertForNextSentencePrediction
class transformers.BertForNextSentencePrediction
< source >
( config )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a next sentence prediction (classification) head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None **kwargs ) → transformers.modeling_outputs.NextSentencePredictorOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair (see input_ids docstring). Indices should be in [0, 1]:
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
A transformers.modeling_outputs.NextSentencePredictorOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when next_sentence_label is provided) — Next sequence prediction (classification) loss.
logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The BertForNextSentencePrediction forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, BertForNextSentencePrediction
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> next_sentence = "The sky is blue due to the shorter wavelength of blue light."
>>> encoding = tokenizer(prompt, next_sentence, return_tensors="pt")
>>> outputs = model(**encoding, labels=torch.LongTensor([1]))
>>> logits = outputs.logits
>>> assert logits[0, 0] < logits[0, 1]
BertForSequenceClassification
class transformers.BertForSequenceClassification
< source >
( config )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if config.num_labels > 1 a classification loss is computed (Cross-Entropy). A regression sketch follows the classification examples below.
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The BertForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example of single-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, BertForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("textattack/bert-base-uncased-yelp-polarity")
>>> model = BertForSequenceClassification.from_pretrained("textattack/bert-base-uncased-yelp-polarity")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax().item()
>>> model.config.id2label[predicted_class_id]
'LABEL_1'
>>> # To train a model on `num_labels` classes, pass `num_labels` to `from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = BertForSequenceClassification.from_pretrained("textattack/bert-base-uncased-yelp-polarity", num_labels=num_labels)
>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
0.01
Example of multi-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, BertForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("textattack/bert-base-uncased-yelp-polarity")
>>> model = BertForSequenceClassification.from_pretrained("textattack/bert-base-uncased-yelp-polarity", problem_type="multi_label_classification")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
>>> # To train a multi-label model on `num_labels` classes, pass `num_labels` and `problem_type` to `from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = BertForSequenceClassification.from_pretrained(
... "textattack/bert-base-uncased-yelp-polarity", num_labels=num_labels, problem_type="multi_label_classification"
... )
>>> labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss
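As noted in the labels description, setting config.num_labels == 1 switches the head to regression with a mean-squared-error loss. A minimal regression sketch (the randomly initialized head would still need training; the float target below is purely illustrative):
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1, problem_type="regression")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([0.7])  # a single float target per example
>>> loss = model(**inputs, labels=labels).loss  # Mean-Square loss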
BertForMultipleChoice
class transformers.BertForMultipleChoice
< source >
( config )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The BertForMultipleChoice forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, BertForMultipleChoice
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = BertForMultipleChoice.from_pretrained("bert-base-uncased")
>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> labels = torch.tensor(0).unsqueeze(0)
>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels)
>>> # the multiple choice head is randomly initialized; the classifier still needs to be trained
>>> loss = outputs.loss
>>> logits = outputs.logits
BertForTokenClassification
class transformers.BertForTokenClassification
< source >
( config )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The BertForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, BertForTokenClassification
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
>>> model = BertForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
>>> inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_token_class_ids = logits.argmax(-1)
>>> # Note that tokens are classified rather than input words, which means that
>>> # there might be more predicted token classes than words.
>>> # Multiple token classes might account for the same word.
>>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
>>> predicted_tokens_classes
['O', 'I-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O', 'I-LOC', 'O', 'I-LOC', 'I-LOC']
>>> labels = predicted_token_class_ids
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
0.01
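As the note in the example points out, labels are predicted per token rather than per word, so one word can be split over several sub-word predictions. A common alignment pattern uses the fast tokenizer's word_ids() and keeps the first sub-word's label for each word; the following is an illustrative sketch, continuing from the example above, not part of the official example:
>>> word_ids = inputs.word_ids(batch_index=0)  # maps each token to its word index (None for special tokens)
>>> word_labels = {}
>>> for token_index, word_index in enumerate(word_ids):
...     if word_index is not None and word_index not in word_labels:
...         word_labels[word_index] = model.config.id2label[predicted_token_class_ids[0, token_index].item()]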
BertForQuestionAnswering
class transformers.BertForQuestionAnswering
< source >
( config )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None start_positions: typing.Optional[torch.Tensor] = None end_positions: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss.
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The BertForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, BertForQuestionAnswering
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("deepset/bert-base-cased-squad2")
>>> model = BertForQuestionAnswering.from_pretrained("deepset/bert-base-cased-squad2")
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
'a nice puppet'
>>> # target span is "nice puppet"
>>> target_start_index = torch.tensor([14])
>>> target_end_index = torch.tensor([15])
>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = outputs.loss
>>> round(loss.item(), 2)
7.41
TFBertModel
class transformers.TFBertModel
< source >
( *args **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Bert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
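For instance, the three ways of packing everything into the first positional argument described above can look like this (a schematic sketch, not an official snippet):
>>> from transformers import AutoTokenizer, TFBertModel
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = TFBertModel.from_pretrained("bert-base-uncased")
>>> enc = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> out = model(enc["input_ids"])  # a single tensor with input_ids only
>>> out = model([enc["input_ids"], enc["attention_mask"]])  # a list, in the order given in the docstring
>>> out = model({"input_ids": enc["input_ids"], "token_type_ids": enc["token_type_ids"]})  # a dict keyed by input names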
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None encoder_hidden_states: np.ndarray | tf.Tensor | None = None encoder_attention_mask: np.ndarray | tf.Tensor | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) — contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). Set to False during training, True during generation
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input; you’re often better off averaging or pooling the sequence of hidden-states over the whole input sequence (see the pooling sketch after the example below).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
The TFBertModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, TFBertModel
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = TFBertModel.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)
>>> last_hidden_states = outputs.last_hidden_state
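As noted under pooler_output above, a mean over the token embeddings is often a better sentence representation than the pooled [CLS] output. A minimal sketch continuing the example (it reuses the inputs and last_hidden_states variables defined there; this is a common recipe, not part of the TFBertModel API):
>>> # average the token embeddings, ignoring padding positions via the attention mask
>>> mask = tf.cast(inputs["attention_mask"], last_hidden_states.dtype)[:, :, None]
>>> sentence_embedding = tf.reduce_sum(last_hidden_states * mask, axis=1) / tf.reduce_sum(mask, axis=1)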
TFBertForPreTraining
class transformers.TFBertForPreTraining
< source >
( *args **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with two heads on top as done during the pretraining: a masked language modeling head and a next sentence prediction (classification) head.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
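For illustration, here is a short sketch of these input formats using TFBertForPreTraining (the same pattern applies to the other TF Bert classes on this page); it is a usage sketch of the conventions above, not additional API surface:
>>> from transformers import AutoTokenizer, TFBertForPreTraining
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = TFBertForPreTraining.from_pretrained("bert-base-uncased")
>>> enc = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> # keyword arguments, as with PyTorch models
>>> out = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])
>>> # a single tensor with input_ids only
>>> out = model(enc["input_ids"])
>>> # a list of tensors, in the order given in the docstring
>>> out = model([enc["input_ids"], enc["attention_mask"]])
>>> # a dictionary keyed by the input names given in the docstring
>>> out = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})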
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None next_sentence_label: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.models.bert.modeling_tf_bert.TFBertForPreTrainingOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
next_sentence_label (tf.Tensor of shape (batch_size,), optional) — Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair (see input_ids docstring). Indices should be in [0, 1]:
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
kwargs (Dict[str, any], optional, defaults to {}) — Used to hide legacy arguments that have been deprecated.
A transformers.models.bert.modeling_tf_bert.TFBertForPreTrainingOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
prediction_logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (tf.Tensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFBertForPreTraining forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFBertForPreTraining
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = TFBertForPreTraining.from_pretrained("bert-base-uncased")
>>> input_ids = tokenizer("Hello, my dog is cute", add_special_tokens=True, return_tensors="tf")
>>> # Batch size 1
>>> outputs = model(input_ids)
>>> prediction_logits, seq_relationship_logits = outputs[:2]
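A hedged sketch of computing the joint pretraining loss, continuing the example above. The label tensors are placeholders for illustration (real MLM labels come from a masking data collator, with -100 on positions that should be ignored), and it assumes the output exposes a combined loss when both labels and next_sentence_label are passed, as the call signature above suggests:
>>> # placeholder labels: predict every token, and mark sentence B as a true continuation
>>> mlm_labels = input_ids["input_ids"]
>>> nsp_label = tf.constant([0])
>>> outputs = model(input_ids, labels=mlm_labels, next_sentence_label=nsp_label)
>>> loss = outputs.loss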
TFBertLMHeadModel
class transformers.TFBertLMHeadModel
< source >
( *args **kwargs )
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None encoder_hidden_states: np.ndarray | tf.Tensor | None = None encoder_attention_mask: np.ndarray | tf.Tensor | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False **kwargs ) → transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values, and the caching sketch after the example below). Set to False during training, and to True during generation.
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) — Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1].
Example:
>>> from transformers import AutoTokenizer, TFBertLMHeadModel
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = TFBertLMHeadModel.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)
>>> logits = outputs.logits
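A rough sketch of reusing past_key_values for cached decoding, continuing the example above. Note that bert-base-uncased is not trained as a causal decoder, so this only illustrates the caching mechanics rather than meaningful generation:
>>> # first pass: ask for the key/value cache along with the logits
>>> outputs = model(**inputs, use_cache=True)
>>> past = outputs.past_key_values
>>> # next pass: feed only the newest token together with the cache
>>> next_token = tf.math.argmax(outputs.logits[:, -1:, :], axis=-1)
>>> outputs = model(input_ids=next_token, past_key_values=past, use_cache=True)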
TFBertForMaskedLM
class transformers.TFBertForMaskedLM
< source >
( *args **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFBertForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, TFBertForMaskedLM
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = TFBertForMaskedLM.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
>>> logits = model(**inputs).logits
>>> # retrieve index of [MASK]
>>> mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
>>> selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
>>> predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
>>> tokenizer.decode(predicted_token_id)
'paris'
>>> labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
>>> # mask labels of non-[MASK] tokens
>>> labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
>>> outputs = model(**inputs, labels=labels)
>>> round(float(outputs.loss), 2)
0.88
TFBertForNextSentencePrediction
class transformers.TFBertForNextSentencePrediction
< source >
( *args **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a next sentence prediction (classification) head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None next_sentence_label: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFNextSentencePredictorOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
A transformers.modeling_tf_outputs.TFNextSentencePredictorOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when next_sentence_label is provided) — Next sentence prediction loss.
logits (tf.Tensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFBertForNextSentencePrediction forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFBertForNextSentencePrediction
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = TFBertForNextSentencePrediction.from_pretrained("bert-base-uncased")
>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> next_sentence = "The sky is blue due to the shorter wavelength of blue light."
>>> encoding = tokenizer(prompt, next_sentence, return_tensors="tf")
>>> logits = model(encoding["input_ids"], token_type_ids=encoding["token_type_ids"])[0]
>>> assert logits[0][0] < logits[0][1]
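Continuing the example, the next sentence prediction loss can be obtained by passing next_sentence_label directly; the value 1 marks the second sentence as random, matching the pair above (a usage sketch, not additional API surface):
>>> outputs = model(
...     encoding["input_ids"],
...     token_type_ids=encoding["token_type_ids"],
...     next_sentence_label=tf.constant([1]),
... )
>>> loss = outputs.loss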
TFBertForSequenceClassification
class transformers.TFBertForSequenceClassification
< source >
( *args **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
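Because these classes are regular tf.keras.Model subclasses, they can also be trained with compile() and fit(). A minimal sketch with a tiny in-memory batch; the texts, labels, and hyperparameters are illustrative, and it assumes a transformers version in which compiling without an explicit loss falls back to the model's internal loss:
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFBertForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
>>> texts = ["I loved this film.", "Terrible service, would not go back."]  # illustrative data
>>> labels = tf.constant([1, 0])
>>> features = dict(tokenizer(texts, padding=True, return_tensors="tf"))
>>> # no loss passed to compile(): the model's built-in loss is used
>>> model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5))
>>> model.fit(features, labels, epochs=1, batch_size=2)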
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if config.num_labels > 1, a classification loss is computed (Cross-Entropy).
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFBertForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, TFBertForSequenceClassification
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("ydshieh/bert-base-uncased-yelp-polarity")
>>> model = TFBertForSequenceClassification.from_pretrained("ydshieh/bert-base-uncased-yelp-polarity")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> logits = model(**inputs).logits
>>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
>>> model.config.id2label[predicted_class_id]
'LABEL_1'
>>> # To train a model on num_labels classes, pass num_labels=num_labels to from_pretrained(...)
>>> num_labels = len(model.config.id2label)
>>> model = TFBertForSequenceClassification.from_pretrained("ydshieh/bert-base-uncased-yelp-polarity", num_labels=num_labels)
>>> labels = tf.constant(1)
>>> loss = model(**inputs, labels=labels).loss
>>> round(float(loss), 2)
0.01
TFBertForMultipleChoice
class transformers.TFBertForMultipleChoice
< source >
( *args **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices] where num_choices is the size of the second dimension of the input tensors. (See input_ids above)
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFBertForMultipleChoice forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, TFBertForMultipleChoice
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = TFBertForMultipleChoice.from_pretrained("bert-base-uncased")
>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
>>> outputs = model(inputs)
>>> # the linear classifier still needs to be trained
>>> logits = outputs.logits
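Continuing the example, a loss can be computed by passing the index of the correct choice as labels; the label value below is a placeholder for illustration:
>>> # placeholder label: treat choice0 as the correct answer
>>> labels = tf.constant([0])
>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss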
TFBertForTokenClassification
class transformers.TFBertForTokenClassification
< source >
( *args **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFBertForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, TFBertForTokenClassification
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
>>> model = TFBertForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
>>> inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
>>> logits = model(**inputs).logits
>>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
>>> # Note that tokens are classified rather than input words, which means
>>> # there might be more predicted token classes than words; multiple token
>>> # classes might account for the same word.
>>> predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
>>> predicted_tokens_classes
['O', 'I-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O', 'I-LOC', 'O', 'I-LOC', 'I-LOC']
>>> labels = predicted_token_class_ids
>>> loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
>>> round(float(loss), 2)
0.01
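If word-level labels are needed, the token predictions above can be regrouped by word. The sketch below is illustrative rather than part of the original example; it assumes the tokenizer is a fast tokenizer (so that word_ids() is available) and reuses inputs and predicted_tokens_classes from above, keeping the first predicted class for each word.
>>> # Sketch: collapse token-level predictions to one class per word (fast tokenizers only).
>>> word_ids = inputs.word_ids(batch_index=0)
>>> word_level_classes = {}
>>> for word_id, token_class in zip(word_ids, predicted_tokens_classes):
...     if word_id is not None and word_id not in word_level_classes:
...         word_level_classes[word_id] = token_class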
TFBertForQuestionAnswering
class transformers.TFBertForQuestionAnswering
< source >
( *args **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
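To make the formats above concrete, the sketch below shows the same forward pass written in each style; it assumes model is any of the TF Bert models on this page and inputs is a dictionary returned by the tokenizer.
>>> # Keyword arguments (PyTorch-style)
>>> outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
>>> # A single tensor with input_ids only
>>> outputs = model(inputs["input_ids"])
>>> # A list of tensors in the order given in the docstring
>>> outputs = model([inputs["input_ids"], inputs["attention_mask"]])
>>> # A dictionary keyed by input names
>>> outputs = model({"input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"]})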
call
< source >
( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None start_positions: np.ndarray | tf.Tensor | None = None end_positions: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
start_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
end_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFBertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, TFBertForQuestionAnswering
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("ydshieh/bert-base-cased-squad2")
>>> model = TFBertForQuestionAnswering.from_pretrained("ydshieh/bert-base-cased-squad2")
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors="tf")
>>> outputs = model(**inputs)
>>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
>>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'a nice puppet'
>>> # target is "nice puppet"
>>> target_start_index = tf.constant([14])
>>> target_end_index = tf.constant([15])
>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = tf.math.reduce_mean(outputs.loss)
>>> round(float(loss), 2)
7.41
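To gauge how confident the model is in the selected span, one option is to turn the start and end logits into probabilities. This is a small sketch added for illustration, reusing outputs, answer_start_index and answer_end_index from the example above.
>>> # Softmax over the sequence dimension gives per-position probabilities.
>>> start_probs = tf.nn.softmax(outputs.start_logits, axis=-1)
>>> end_probs = tf.nn.softmax(outputs.end_logits, axis=-1)
>>> span_confidence = float(start_probs[0, answer_start_index] * end_probs[0, answer_end_index])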
FlaxBertModel
class transformers.FlaxBertModel
< source >
( config: BertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
The bare Bert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxBertModel
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = FlaxBertModel.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
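Since the loaded model behaves as a pure function of its inputs, the forward pass can be compiled with jax.jit, one of the JAX features listed above. The wrapper below is a minimal sketch (the function name encode is illustrative), reusing model and inputs from the example above.
>>> import jax
>>> @jax.jit
... def encode(input_ids, attention_mask):
...     return model(input_ids, attention_mask=attention_mask).last_hidden_state
>>> compiled_hidden_states = encode(inputs["input_ids"], inputs["attention_mask"])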
FlaxBertForPreTraining
class transformers.FlaxBertForPreTraining
< source >
( config: BertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Bert Model with two heads on top as done during the pretraining: a masked language modeling head and a next sentence prediction (classification) head.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.models.bert.modeling_flax_bert.FlaxBertForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.models.bert.modeling_flax_bert.FlaxBertForPreTrainingOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
prediction_logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (jnp.ndarray of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxBertForPreTraining
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = FlaxBertForPreTraining.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
>>> outputs = model(**inputs)
>>> prediction_logits = outputs.prediction_logits
>>> seq_relationship_logits = outputs.seq_relationship_logits
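The two heads can be read out separately. As a rough sketch (not part of the original example), the masked-language-modeling head yields a vocabulary distribution per position, and the next-sentence-prediction head yields a two-way score where index 0 corresponds to a continuation and index 1 to a random next sentence.
>>> import jax.numpy as jnp
>>> # Greedy MLM prediction for every position
>>> predicted_ids = jnp.argmax(prediction_logits, axis=-1)
>>> # NSP decision: 0 = continuation, 1 = random next sentence
>>> is_random_next = int(jnp.argmax(seq_relationship_logits, axis=-1)[0])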
FlaxBertForCausalLM
class transformers.FlaxBertForCausalLM
< source >
( config: BertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Bert Model with a language modeling head on top (a linear layer on top of the hidden-states output), e.g. for autoregressive tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxBertForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = FlaxBertForCausalLM.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
>>> outputs = model(**inputs)
>>> # retrieve the logits for the next token
>>> next_token_logits = outputs.logits[:, -1]
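From the last-position logits, a greedy next-token choice can be sketched as below. This is only illustrative; bert-base-uncased is not trained for left-to-right generation, so the decoded token is not expected to be a fluent continuation.
>>> import jax.numpy as jnp
>>> next_token_id = int(jnp.argmax(next_token_logits, axis=-1)[0])
>>> tokenizer.decode([next_token_id])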
FlaxBertForMaskedLM
class transformers.FlaxBertForMaskedLM
< source >
( config: BertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Bert Model with a language modeling head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxBertForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = FlaxBertForMaskedLM.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="jax")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
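To read off the model's guess for the masked position, a minimal sketch (added here for illustration) locates the [MASK] token and decodes the highest-scoring vocabulary entry at that position, reusing inputs and logits from the example above.
>>> import jax.numpy as jnp
>>> # Index of the [MASK] token in the (single) input sequence
>>> mask_index = int(jnp.argmax(inputs["input_ids"][0] == tokenizer.mask_token_id))
>>> predicted_id = int(jnp.argmax(logits[0, mask_index], axis=-1))
>>> tokenizer.decode([predicted_id])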
FlaxBertForNextSentencePrediction
class transformers.FlaxBertForNextSentencePrediction
< source >
( config: BertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Bert Model with a next sentence prediction (classification) head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxNextSentencePredictorOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxNextSentencePredictorOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxBertForNextSentencePrediction
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = FlaxBertForNextSentencePrediction.from_pretrained("bert-base-uncased")
>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> next_sentence = "The sky is blue due to the shorter wavelength of blue light."
>>> encoding = tokenizer(prompt, next_sentence, return_tensors="jax")
>>> outputs = model(**encoding)
>>> logits = outputs.logits
>>> assert logits[0, 0] < logits[0, 1]  # next sentence was random
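The same logits can be turned into a probability that the second sentence is unrelated, as in the short sketch below (added for illustration).
>>> import jax
>>> probs = jax.nn.softmax(logits, axis=-1)
>>> prob_random_next = float(probs[0, 1])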
FlaxBertForSequenceClassification
class transformers.FlaxBertForSequenceClassification
< source >
( config: BertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxBertForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = FlaxBertForSequenceClassification.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
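The predicted class can be recovered through the id2label mapping in the configuration, as sketched below. Note that bert-base-uncased has no fine-tuned classification head, so the labels here are only the generic LABEL_0/LABEL_1 placeholders.
>>> import jax.numpy as jnp
>>> predicted_class_id = int(jnp.argmax(logits, axis=-1)[0])
>>> model.config.id2label[predicted_class_id]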
FlaxBertForMultipleChoice
class transformers.FlaxBertForMultipleChoice
< source >
( config: BertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Bert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxBertForMultipleChoice
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = FlaxBertForMultipleChoice.from_pretrained("bert-base-uncased")
>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="jax", padding=True)
>>> outputs = model(**{k: v[None, :] for k, v in encoding.items()})
>>> logits = outputs.logits
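The highest-scoring choice is the argmax over the num_choices dimension, as in this short sketch (added for illustration).
>>> import jax.numpy as jnp
>>> selected_choice = int(jnp.argmax(logits, axis=-1)[0])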
FlaxBertForTokenClassification
class transformers.FlaxBertForTokenClassification
< source >
( config: BertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Bert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, you should call the Module instance afterwards instead of this method, since the instance call runs the pre- and post-processing steps while this method silently skips them.
Example:
>>> from transformers import AutoTokenizer, FlaxBertForTokenClassification
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = FlaxBertForTokenClassification.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
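The logits have shape (batch_size, sequence_length, config.num_labels). A minimal follow-up to the example above maps each token to its highest-scoring label id (the classification head of bert-base-uncased is not fine-tuned, so the ids are illustrative only):
>>> predicted_label_ids = logits.argmax(-1)  # shape (batch_size, sequence_length)
>>> tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
>>> token_predictions = list(zip(tokens, predicted_label_ids[0].tolist()))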
FlaxBertForQuestionAnswering
class transformers.FlaxBertForQuestionAnswering
< source >
( config: BertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
Bert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
< source >
( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BertConfig) and inputs.
start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, you should call the Module instance afterwards instead of this method, since the instance call runs the pre- and post-processing steps while this method silently skips them.
Example:
>>> from transformers import AutoTokenizer, FlaxBertForQuestionAnswering
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = FlaxBertForQuestionAnswering.from_pretrained("bert-base-uncased")
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors="jax")
>>> outputs = model(**inputs)
>>> start_scores = outputs.start_logits
>>> end_scores = outputs.end_logits
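A possible follow-up to the example above extracts the answer span from the start/end scores; since the bare bert-base-uncased checkpoint has no fine-tuned question-answering head, the decoded span is illustrative only:
>>> start_idx = int(start_scores.argmax(-1)[0])
>>> end_idx = int(end_scores.argmax(-1)[0])
>>> answer_ids = inputs["input_ids"][0, start_idx : end_idx + 1]
>>> answer = tokenizer.decode(answer_ids.tolist())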
BERTweet
Overview
The BERTweet model was proposed in BERTweet: A pre-trained language model for English Tweets by Dat Quoc Nguyen, Thanh Vu, Anh Tuan Nguyen.
The abstract from the paper is the following:
We present BERTweet, the first public large-scale pre-trained language model for English Tweets. Our BERTweet, having the same architecture as BERT-base (Devlin et al., 2019), is trained using the RoBERTa pre-training procedure (Liu et al., 2019). Experiments show that BERTweet outperforms strong baselines RoBERTa-base and XLM-R-base (Conneau et al., 2020), producing better performance results than the previous state-of-the-art models on three Tweet NLP tasks: Part-of-speech tagging, Named-entity recognition and text classification.
Example of use:
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>> bertweet = AutoModel.from_pretrained("vinai/bertweet-base")
>>> tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", use_fast=False)
>>> # the input Tweet below is already normalized
>>> line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"
>>> input_ids = torch.tensor([tokenizer.encode(line)])
>>> with torch.no_grad():
... features = bertweet(input_ids)
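As a small follow-up (continuing the example above), the returned features expose per-token hidden states; the first position, corresponding to the <s> token, is commonly used as a sentence-level representation:
>>> token_embeddings = features.last_hidden_state  # (batch_size, sequence_length, hidden_size)
>>> sentence_embedding = token_embeddings[:, 0]  # hidden state of the <s> token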
This model was contributed by dqnguyen. The original code can be found here.
BertweetTokenizer
class transformers.BertweetTokenizer
< source >
( vocab_file merges_file normalization = False bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' **kwargs )
Parameters
vocab_file (str) — Path to the vocabulary file.
merges_file (str) — Path to the merges file.
normalization (bool, optional, defaults to False) — Whether or not to apply a normalization preprocess.
bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") — The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
Constructs a BERTweet tokenizer, using Byte-Pair-Encoding.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
Loads a pre-existing dictionary from a text file and adds its symbols to this instance.
build_inputs_with_special_tokens
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A BERTweet sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
Converts a sequence of tokens (string) into a single string.
create_token_type_ids_from_sequences
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. BERTweet does not make use of token type ids, therefore a list of zeros is returned.
get_special_tokens_mask
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model.
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.
Normalize tokens in a Tweet.
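A minimal sketch of the normalization option documented above: with normalization=True the tokenizer applies its Tweet normalization (translating user mentions and URLs into the special @USER and HTTPURL tokens; the emoji package is optionally used for emoji) before tokenizing. The normalized string shown in the comment is indicative rather than guaranteed.
>>> from transformers import AutoTokenizer
>>> # normalization=True applies the Tweet normalization before tokenization
>>> tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", use_fast=False, normalization=True)
>>> tokenizer.normalizeTweet("Check this out https://example.com @someone")  # roughly: 'Check this out HTTPURL @USER'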
BORT
This model is in maintenance mode only, so we won’t accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: pip install -U transformers==4.30.0.
The BORT model was proposed in Optimal Subarchitecture Extraction for BERT by Adrian de Wynter and Daniel J. Perry. It is an optimal subset of architectural parameters for BERT, which the authors refer to as “Bort”.
The abstract from the paper is the following:
We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as “Bort”, is demonstrably smaller, having an effective (that is, not counting the embedding layer) size of 5.5% the original BERT-large architecture, and 16% of the net size. Bort is also able to be pretrained in 288 GPU hours, which is 1.2% of the time required to pretrain the highest-performing BERT parametric architectural variant, RoBERTa-large (Liu et al., 2019), and about 33% of that of the world-record, in GPU hours, required to train BERT-large on the same hardware. It is also 7.9x faster on a CPU, as well as being better performing than other compressed variants of the architecture, and some of the non-compressed variants: it obtains performance improvements of between 0.3% and 31%, absolute, with respect to BERT-large, on multiple public natural language understanding (NLU) benchmarks.
ByT5
Overview
The ByT5 model was presented in ByT5: Towards a token-free future with pre-trained byte-to-byte models by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
The abstract from the paper is the following:
Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.
This model was contributed by patrickvonplaten. The original code can be found here.
ByT5’s architecture is based on the T5v1.1 model, so one can refer to T5v1.1’s documentation page. The two only differ in how inputs should be prepared for the model; see the code examples below.
Since ByT5 was pre-trained in an unsupervised fashion, there is no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.
Example
ByT5 works on raw UTF-8 bytes, so it can be used without a tokenizer:
>>> from transformers import T5ForConditionalGeneration
>>> import torch
>>> model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")
>>> num_special_tokens = 3
>>> # the model has 3 special tokens (pad, </s>, unk) occupying input ids 0, 1 and 2,
>>> # so the utf-8 byte values are shifted by 3 before being passed to the model
>>> input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + num_special_tokens
>>> labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + num_special_tokens
>>> loss = model(input_ids, labels=labels).loss
>>> loss.item()
2.66
For batched inference and training it is however recommended to make use of the tokenizer:
>>> from transformers import T5ForConditionalGeneration, AutoTokenizer
>>> model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")
>>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
>>> model_inputs = tokenizer(
... ["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt"
... )
>>> labels_dict = tokenizer(
... ["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt"
... )
>>> labels = labels_dict.input_ids
>>> loss = model(**model_inputs, labels=labels).loss
>>> loss.item()
17.9
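The same tokenizer/model pair can also be used for generation; a minimal sketch continuing the batched example above (the pretrained-only checkpoint is not fine-tuned for translation, so the decoded text is not expected to be meaningful):
>>> generated_ids = model.generate(**model_inputs, max_new_tokens=40)
>>> decoded = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)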
Similar to T5, ByT5 was trained on the span-mask denoising task. However, since the model works directly on characters, the pretraining task is a bit different. Let’s corrupt some characters of the input sentence "The dog chases a ball in the park." and ask ByT5 to predict them for us.
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-base")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-base")
>>> input_ids_prompt = "The dog chases a ball in the park."
>>> input_ids = tokenizer(input_ids_prompt).input_ids
>>> # ByT5 works on the byte level and, unlike T5, has no dedicated sentinel tokens:
>>> # mask tokens simply count down from the last input id, 258
>>> # (2**8 byte values + 3 special tokens = 259 ids in total).
>>> # Here "chases" is replaced by 258 and "in the" by 257,
>>> # i.e. roughly "The dog [258] a ball [257] park."
>>> input_ids = torch.tensor([input_ids[:8] + [258] + input_ids[14:21] + [257] + input_ids[28:]])
>>> input_ids
tensor([[ 87, 107, 104, 35, 103, 114, 106, 35, 258, 35, 100, 35, 101, 100, 111, 111, 257, 35, 115, 100, 117, 110, 49, 1]])
>>> # ByT5 produces one byte-level token per step, so generate enough output tokens
>>> output_ids = model.generate(input_ids, max_length=100)[0].tolist()
>>> output_ids
[0, 258, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 257, 35, 108, 113, 35, 119, 107, 104, 35, 103, 108, 118, 102, 114, 256, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49, 35, 87, 107, 104, 35, 103, 114, 106, 35, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 35, 100, 35, 101, 100, 111, 111, 35, 108, 113, 255, 35, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49]
>>> # the model fills the masked spans and opens new sentinels counting down (258, 257, ...),
>>> # so split the generated ids on these descending sentinel ids
>>> output_ids_list = []
>>> start_token = 0
>>> sentinel_token = 258
>>> while sentinel_token in output_ids:
... split_idx = output_ids.index(sentinel_token)
... output_ids_list.append(output_ids[start_token:split_idx])
... start_token = split_idx
... sentinel_token -= 1
>>> output_ids_list.append(output_ids[start_token:])
>>> output_string = tokenizer.batch_decode(output_ids_list)
>>> output_string
['<pad>', 'is the one who does', ' in the disco', 'in the park. The dog is the one who does a ball in', ' in the park.']
ByT5Tokenizer
class transformers.ByT5Tokenizer
< source >
( eos_token = '</s>' unk_token = '<unk>' pad_token = '<pad>' extra_ids = 125 additional_special_tokens = None **kwargs )
Parameters
eos_token (str, optional, defaults to "</s>") — The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths.
extra_ids (int, optional, defaults to 125) — Add a number of extra ids added to the end of the vocabulary for use as sentinels. These tokens are accessible as “<extra_id_{%d}>” where “{%d}” is a number between 0 and extra_ids-1. Extra tokens are indexed from the end of the vocabulary up to the beginning (“<extra_id_0>” is the last token in the vocabulary, like in ByT5 preprocessing; see here).
additional_special_tokens (List[str], optional) — Additional special tokens used by the tokenizer.
Construct a ByT5 tokenizer. ByT5 simply uses raw UTF-8 byte encoding.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
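Concretely, each UTF-8 byte is mapped to its byte value shifted by the three leading special tokens (pad, </s>, unk), and an </s> id is appended; a small illustration (the expected ids below follow from that scheme):
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
>>> tokenizer("hi").input_ids  # ord("h") + 3, ord("i") + 3, then the </s> id
[107, 108, 1]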
build_inputs_with_special_tokens
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A sequence has the following format:
single sequence: X </s>
pair of sequences: A </s> B </s>
Converts a sequence of tokens (string) into a single string.
create_token_type_ids_from_sequences
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. ByT5 does not make use of token type ids, therefore a list of zeros is returned.
get_special_tokens_mask
< source >
( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model.
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.
See ByT5Tokenizer for all details.
BROS
Overview
The BROS model was proposed in BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents by Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park.
BROS stands for BERT Relying On Spatiality. It is an encoder-only Transformer model that takes a sequence of tokens and their bounding boxes as inputs and outputs a sequence of hidden states. BROS encodes relative spatial information instead of using absolute spatial information.
It is pre-trained with two objectives: a token-masked language modeling objective (TMLM) used in BERT, and a novel area-masked language modeling objective (AMLM). In TMLM, tokens are randomly masked, and the model predicts the masked tokens using spatial information and other unmasked tokens. AMLM is a 2D version of TMLM. It randomly masks text tokens and predicts with the same information as TMLM, but it masks text blocks (areas).
BrosForTokenClassification has a simple linear layer on top of BrosModel. It predicts the label of each token. BrosSpadeEEForTokenClassification has an initial_token_classifier and subsequent_token_classifier on top of BrosModel. initial_token_classifier is used to predict the first token of each entity, and subsequent_token_classifier is used to predict the next token within an entity. BrosSpadeELForTokenClassification has an entity_linker on top of BrosModel. entity_linker is used to predict the relation between two entities.
BrosForTokenClassification and BrosSpadeEEForTokenClassification essentially perform the same job. However, BrosForTokenClassification assumes input tokens are perfectly serialized (which is a very challenging task since they exist in a 2D space), while BrosSpadeEEForTokenClassification allows for more flexibility in handling serialization errors, as it predicts the next connection token from a given token.
BrosSpadeELForTokenClassification performs the intra-entity linking task. It predicts the relation from one token (of one entity) to another token (of another entity) if the two entities share some relation.
BROS achieves comparable or better results on Key Information Extraction (KIE) benchmarks such as FUNSD, SROIE, CORD and SciTSR, without relying on explicit visual features.
The abstract from the paper is the following:
Key information extraction (KIE) from document images requires understanding the contextual and spatial semantics of texts in two-dimensional (2D) space. Many recent studies try to solve the task by developing pre-trained language models focusing on combining visual features from document images with texts and their layout. On the other hand, this paper tackles the problem by going back to the basic: effective combination of text and layout. Specifically, we propose a pre-trained language model, named BROS (BERT Relying On Spatiality), that encodes relative positions of texts in 2D space and learns from unlabeled documents with area-masking strategy. With this optimized training scheme for understanding texts in 2D space, BROS shows comparable or better performance compared to previous methods on four KIE benchmarks (FUNSD, SROIE, CORD, and SciTSR) without relying on visual features. This paper also reveals two real-world challenges in KIE tasks-(1) minimizing the error from incorrect text ordering and (2) efficient learning from fewer downstream examples-and demonstrates the superiority of BROS over previous methods.
Tips:
forward() requires input_ids and bbox (bounding box). Each bounding box should be in (x0, y0, x1, y1) format (top-left corner, bottom-right corner). Obtaining bounding boxes depends on an external OCR system. The x coordinates should be normalized by the document image width, and the y coordinates by the document image height.
def expand_and_normalize_bbox(bboxes, doc_width, doc_height):
    # normalize the x coordinates by the document image width and the y coordinates by its height
    bboxes[:, [0, 2]] = bboxes[:, [0, 2]] / doc_width
    bboxes[:, [1, 3]] = bboxes[:, [1, 3]] / doc_height
BrosForTokenClassification.forward(), BrosSpadeEEForTokenClassification.forward() and BrosSpadeELForTokenClassification.forward() require not only input_ids and bbox but also box_first_token_mask for loss calculation. It is a mask to filter out non-first tokens of each box. You can obtain this mask by saving the start token indices of the bounding boxes when creating input_ids from words. You can build box_first_token_mask with the following code:
import itertools
from typing import List

import numpy as np


def make_box_first_token_mask(bboxes, words, tokenizer, max_seq_length=512):
    box_first_token_mask = np.zeros(max_seq_length, dtype=np.bool_)
    # encode each word separately and record how many tokens it produces
    input_ids_list: List[List[int]] = [tokenizer.encode(e, add_special_tokens=False) for e in words]
    tokens_length_list: List[int] = [len(l) for l in input_ids_list]
    box_end_token_indices = np.array(list(itertools.accumulate(tokens_length_list)))
    box_start_token_indices = box_end_token_indices - np.array(tokens_length_list)
    # filter out indices that fall outside max_seq_length
    box_end_token_indices = box_end_token_indices[box_end_token_indices < max_seq_length - 1]
    if len(box_start_token_indices) > len(box_end_token_indices):
        box_start_token_indices = box_start_token_indices[: len(box_end_token_indices)]
    # mark the first token of each bounding box
    box_first_token_mask[box_start_token_indices] = True
    return box_first_token_mask
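A hypothetical usage sketch of this helper (the word list and bounding boxes below are illustrative assumptions, not values from the BROS paper or checkpoint):
>>> from transformers import BrosProcessor
>>> processor = BrosProcessor.from_pretrained("jinho8345/bros-base-uncased")
>>> words = ["Invoice", "No.", "12345"]  # hypothetical OCR output
>>> bboxes = [[0.1, 0.1, 0.3, 0.2], [0.32, 0.1, 0.40, 0.2], [0.42, 0.1, 0.6, 0.2]]  # already normalized
>>> box_first_token_mask = make_box_first_token_mask(bboxes, words, processor.tokenizer)
>>> box_first_token_mask[:8]  # True at the first subword token of each word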
Demo scripts can be found here.
This model was contributed by jinho8345. The original code can be found here.
BrosConfig
class transformers.BrosConfig
< source >
( vocab_size = 30522 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 type_vocab_size = 2 initializer_range = 0.02 layer_norm_eps = 1e-12 pad_token_id = 0 dim_bbox = 8 bbox_scale = 100.0 n_relations = 1 classifier_dropout_prob = 0.1 **kwargs )
Parameters
vocab_size (int, optional, defaults to 30522) — Vocabulary size of the Bros model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling BrosModel or TFBrosModel.
hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) — The vocabulary size of the token_type_ids passed when calling BrosModel or TFBrosModel.
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers.
pad_token_id (int, optional, defaults to 0) — The index of the padding token in the token vocabulary.
dim_bbox (int, optional, defaults to 8) — The dimension of the bounding box coordinates. (x0, y1, x1, y0, x1, y1, x0, y1)
bbox_scale (float, optional, defaults to 100.0) — The scale factor of the bounding box coordinates.
n_relations (int, optional, defaults to 1) — The number of relations for SpadeEE(entity extraction), SpadeEL(entity linking) head.
classifier_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the classifier head.
This is the configuration class to store the configuration of a BrosModel or a TFBrosModel. It is used to instantiate a Bros model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Bros jinho8345/bros-base-uncased architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Examples:
>>> from transformers import BrosConfig, BrosModel
>>> # initializing a BROS jinho8345/bros-base-uncased style configuration
>>> configuration = BrosConfig()
>>> # initializing a model (with random weights) from the configuration
>>> model = BrosModel(configuration)
>>> # accessing the model configuration
>>> configuration = model.config
BrosProcessor
class transformers.BrosProcessor
< source >
( tokenizer = None **kwargs )
Parameters
tokenizer (BertTokenizerFast) — An instance of BertTokenizerFast. The tokenizer is a required input.
Constructs a Bros processor which wraps a BERT tokenizer.
BrosProcessor offers all the functionalities of BertTokenizerFast. See the docstring of __call__() and decode() for more information.
__call__
< source >
( text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None max_length: typing.Optional[int] = None stride: int = 0 pad_to_multiple_of: typing.Optional[int] = None return_token_type_ids: typing.Optional[bool] = None return_attention_mask: typing.Optional[bool] = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None **kwargs )
This method uses BertTokenizerFast.__call__() to prepare text for the model.
Please refer to its docstring for more information.
BrosModel
class transformers.BrosModel
< source >
( config add_pooling_layer = True )
Parameters
config (BrosConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Bros Model transformer outputting raw hidden-states without any specific head on top. This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None bbox: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using BrosProcessor. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (torch.FloatTensor of shape (batch_size, num_boxes, 4)) — Bounding box coordinates for each token in the input sequence. Each bounding box is a list of four values (x1, y1, x2, y2), where (x1, y1) is the top left corner, and (x2, y2) is the bottom right corner of the bounding box.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
bbox_first_token_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to indicate the first token of each bounding box. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BrosConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
The BrosModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, you should call the Module instance afterwards instead of this method, since the instance call runs the pre- and post-processing steps while this method silently skips them.
Examples:
>>> import torch
>>> from transformers import BrosProcessor, BrosModel
>>> processor = BrosProcessor.from_pretrained("jinho8345/bros-base-uncased")
>>> model = BrosModel.from_pretrained("jinho8345/bros-base-uncased")
>>> encoding = processor("Hello, my dog is cute", add_special_tokens=False, return_tensors="pt")
>>> bbox = torch.tensor([[[0, 0, 1, 1]]]).repeat(1, encoding["input_ids"].shape[-1], 1)
>>> encoding["bbox"] = bbox
>>> outputs = model(**encoding)
>>> last_hidden_states = outputs.last_hidden_state
BrosForTokenClassification
class transformers.BrosForTokenClassification
< source >
( config )
Parameters
config (BrosConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bros Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None bbox: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None bbox_first_token_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using BrosProcessor. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (torch.FloatTensor of shape (batch_size, num_boxes, 4)) — Bounding box coordinates for each token in the input sequence. Each bounding box is a list of four values (x1, y1, x2, y2), where (x1, y1) is the top left corner, and (x2, y2) is the bottom right corner of the bounding box.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
bbox_first_token_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to indicate the first token of each bounding box. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BrosConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The BrosForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, you should call the Module instance afterwards instead of this method, since the instance call runs the pre- and post-processing steps while this method silently skips them.
Examples:
>>> import torch
>>> from transformers import BrosProcessor, BrosForTokenClassification
>>> processor = BrosProcessor.from_pretrained("jinho8345/bros-base-uncased")
>>> model = BrosForTokenClassification.from_pretrained("jinho8345/bros-base-uncased")
>>> encoding = processor("Hello, my dog is cute", add_special_tokens=False, return_tensors="pt")
>>> bbox = torch.tensor([[[0, 0, 1, 1]]]).repeat(1, encoding["input_ids"].shape[-1], 1)
>>> encoding["bbox"] = bbox
>>> outputs = model(**encoding)
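A minimal follow-up to the example above reduces the per-token logits (shape (batch_size, sequence_length, config.num_labels), as documented above) to predicted label ids; the base checkpoint's classification head is not fine-tuned, so the ids are illustrative only:
>>> predicted_label_ids = outputs.logits.argmax(-1)  # (batch_size, sequence_length)
>>> tokens = processor.tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
>>> token_predictions = list(zip(tokens, predicted_label_ids[0].tolist()))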
BrosSpadeEEForTokenClassification
class transformers.BrosSpadeEEForTokenClassification
< source >
( config )
Parameters
config (BrosConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bros Model with a token classification head on top (initial_token_layers and subsequent_token_layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. The initial_token_classifier is used to predict the first token of each entity, and the subsequent_token_classifier is used to predict the subsequent tokens within an entity. Compared to BrosForTokenClassification, this model is more robust to serialization errors since it predicts the next token from a given token.
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None bbox: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None bbox_first_token_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None initial_token_labels: typing.Optional[torch.Tensor] = None subsequent_token_labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.bros.modeling_bros.BrosSpadeOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using BrosProcessor. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (torch.FloatTensor of shape (batch_size, num_boxes, 4)) — Bounding box coordinates for each token in the input sequence. Each bounding box is a list of four values (x1, y1, x2, y2), where (x1, y1) is the top left corner, and (x2, y2) is the bottom right corner of the bounding box.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
bbox_first_token_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to indicate the first token of each bounding box. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.bros.modeling_bros.BrosSpadeOutput or tuple(torch.FloatTensor)
A transformers.models.bros.modeling_bros.BrosSpadeOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BrosConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
initial_token_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores for entity initial tokens (before SoftMax).
subsequent_token_logits (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length+1)) — Classification scores for entity sequence tokens (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The BrosSpadeEEForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, you should call the Module instance afterwards instead of this method, since the instance call runs the pre- and post-processing steps while this method silently skips them.
Examples:
>>> import torch
>>> from transformers import BrosProcessor, BrosSpadeEEForTokenClassification
>>> processor = BrosProcessor.from_pretrained("jinho8345/bros-base-uncased")
>>> model = BrosSpadeEEForTokenClassification.from_pretrained("jinho8345/bros-base-uncased")
>>> encoding = processor("Hello, my dog is cute", add_special_tokens=False, return_tensors="pt")
>>> bbox = torch.tensor([[[0, 0, 1, 1]]]).repeat(1, encoding["input_ids"].shape[-1], 1)
>>> encoding["bbox"] = bbox
>>> outputs = model(**encoding)
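A minimal follow-up to the example above reads the two sets of scores described in the returns section (initial_token_logits and subsequent_token_logits); with the untuned heads the values are illustrative only:
>>> initial_predictions = outputs.initial_token_logits.argmax(-1)  # (batch_size, sequence_length)
>>> subsequent_predictions = outputs.subsequent_token_logits.argmax(-1)  # (batch_size, sequence_length)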
BrosSpadeELForTokenClassification
class transformers.BrosSpadeELForTokenClassification
< source >
( config )
Parameters
config (BrosConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Bros Model with a token classification head on top (an entity_linker layer on top of the hidden-states output) e.g. for Entity Linking. The entity_linker is used to predict intra-entity links (one entity to another entity).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.Tensor] = None bbox: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None bbox_first_token_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using BrosProcessor. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (torch.FloatTensor of shape (batch_size, num_boxes, 4)) — Bounding box coordinates for each token in the input sequence. Each bounding box is a list of four values (x1, y1, x2, y2), where (x1, y1) is the top left corner, and (x2, y2) is the bottom right corner of the bounding box.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
bbox_first_token_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to indicate the first token of each bounding box. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BrosConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The BrosSpadeELForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
>>> import torch
>>> from transformers import BrosProcessor, BrosSpadeELForTokenClassification
>>> processor = BrosProcessor.from_pretrained("jinho8345/bros-base-uncased")
>>> model = BrosSpadeELForTokenClassification.from_pretrained("jinho8345/bros-base-uncased")
>>> encoding = processor("Hello, my dog is cute", add_special_tokens=False, return_tensors="pt")
>>> bbox = torch.tensor([[[0, 0, 1, 1]]]).repeat(1, encoding["input_ids"].shape[-1], 1)
>>> encoding["bbox"] = bbox
>>> outputs = model(**encoding) |
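As with the entity-extraction head above, a greedy argmax over the returned logits yields a per-token prediction; this is an illustrative sketch only:
>>> # Per-token predicted link/label index (greedy over the last logits dimension)
>>> predictions = outputs.logits.argmax(-1)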
BioGPT
Overview
The BioGPT model was proposed in BioGPT: generative pre-trained transformer for biomedical text generation and mining by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone, and is pre-trained on 15M PubMed abstracts from scratch.
The abstract from the paper is the following:
Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.
Tips:
BioGPT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.
BioGPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows BioGPT to generate syntactically coherent text as it can be observed in the run_generation.py example script.
The model can take the past_key_values (for PyTorch) as input, which is the previously computed key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see past_key_values argument of the BioGptForCausalLM.forward() method for more information on its usage.
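As a minimal illustration of the tips above, the sketch below relies on the standard generate() API, which reuses past_key_values internally so cached key/value pairs are not re-computed; the prompt and generation length are arbitrary examples:
>>> import torch
>>> from transformers import AutoTokenizer, BioGptForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
>>> model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
>>> inputs = tokenizer("COVID-19 is", return_tensors="pt")
>>> with torch.no_grad():
...     generated = model.generate(**inputs, max_new_tokens=20)
>>> print(tokenizer.decode(generated[0], skip_special_tokens=True))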
This model was contributed by kamalkraj. The original code can be found here.
Documentation resources
Causal language modeling task guide
BioGptConfig
class transformers.BioGptConfig
< source >
( vocab_size = 42384 hidden_size = 1024 num_hidden_layers = 24 num_attention_heads = 16 intermediate_size = 4096 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 1024 initializer_range = 0.02 layer_norm_eps = 1e-12 scale_embedding = True use_cache = True layerdrop = 0.0 activation_dropout = 0.0 pad_token_id = 1 bos_token_id = 0 eos_token_id = 2 **kwargs )
Parameters
vocab_size (int, optional, defaults to 42384) — Vocabulary size of the BioGPT model. Defines the number of different tokens that can be represented by the input_ids passed when calling BioGptModel.
hidden_size (int, optional, defaults to 1024) — Dimension of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 24) — Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 4096) — Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 1024) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers.
scale_embedding (bool, optional, defaults to True) — Scale embeddings by dividing by sqrt(d_model).
use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True.
layerdrop (float, optional, defaults to 0.0) — Please refer to the LayerDrop paper (https://arxiv.org/abs/1909.11556) for further details.
activation_dropout (float, optional, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer.
pad_token_id (int, optional, defaults to 1) — Padding token id.
bos_token_id (int, optional, defaults to 0) — Beginning of stream token id.
eos_token_id (int, optional, defaults to 2) — End of stream token id.
This is the configuration class to store the configuration of a BioGptModel. It is used to instantiate a BioGPT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BioGPT microsoft/biogpt architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import BioGptModel, BioGptConfig
>>> # Initializing a BioGPT microsoft/biogpt style configuration
>>> configuration = BioGptConfig()
>>> # Initializing a model from the microsoft/biogpt style configuration
>>> model = BioGptModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
BioGptTokenizer
class transformers.BioGptTokenizer
< source >
( vocab_file merges_file unk_token = '<unk>' bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' pad_token = '<pad>' **kwargs )
Parameters
vocab_file (str) — Path to the vocabulary file.
merges_file (str) — Merges file.
unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") — The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths.
Construct a FAIRSEQ Transformer tokenizer. Moses tokenization followed by Byte-Pair Encoding.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
save_vocabulary
< source >
( save_directory: str filename_prefix: typing.Optional[str] = None )
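A brief usage sketch (the sentence and the target directory are arbitrary examples, and the directory is assumed to already exist):
>>> from transformers import BioGptTokenizer
>>> tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
>>> # Moses tokenization followed by Byte-Pair Encoding
>>> tokens = tokenizer.tokenize("BioGPT generates fluent descriptions for biomedical terms")
>>> # Write the vocabulary and merges files so the tokenizer can be reloaded later
>>> saved_files = tokenizer.save_vocabulary("./biogpt-tokenizer")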
BioGptModel
class transformers.BioGptModel
< source >
( config: BioGptConfig )
Parameters
config (BioGptConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare BioGPT Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BioGptConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
The BioGptModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, BioGptModel
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
>>> model = BioGptModel.from_pretrained("microsoft/biogpt")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
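Building on the example above, the hedged sketch below runs one extra decoding step with past_key_values so the prompt is not re-encoded; the appended token id is just a placeholder rather than a real sampling step:
>>> # First pass: cache the key/value states for the whole prompt
>>> first = model(**inputs, use_cache=True)
>>> # Second pass: feed only the new token together with the cache; the attention mask must
>>> # cover the full past-plus-new length
>>> new_token = inputs["input_ids"][:, -1:]  # placeholder for a freshly generated token id
>>> full_mask = torch.ones(1, inputs["input_ids"].shape[-1] + 1, dtype=torch.long)
>>> second = model(input_ids=new_token, attention_mask=full_mask, past_key_values=first.past_key_values, use_cache=True)
>>> second.last_hidden_state.shape  # (batch_size, 1, hidden_size)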
BioGptForCausalLM
class transformers.BioGptForCausalLM
< source >
( config )
Parameters
config (BioGptConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
BioGPT Model with a language modeling head on top for CLM fine-tuning. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BioGptConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross-attention weights after the attention softmax, used to compute the weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
The BioGptForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> import torch
>>> from transformers import AutoTokenizer, BioGptForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
>>> model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> loss = outputs.loss
>>> logits = outputs.logits
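Continuing the example, the logits at the last position score the token that would follow the input; a minimal sketch of a greedy next-token readout:
>>> # Greedy choice of the most likely next token id
>>> next_token_id = logits[:, -1, :].argmax(-1)
>>> tokenizer.decode(next_token_id)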
BioGptForTokenClassification
class transformers.BioGptForTokenClassification
< source >
( config )
Parameters
config (BioGptConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
BioGPT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >
( input_ids: typing.Optional[torch.LongTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BioGptConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The BioGptForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, BioGptForTokenClassification
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
>>> model = BioGptForTokenClassification.from_pretrained("microsoft/biogpt")
>>> inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_token_class_ids = logits.argmax(-1)
>>> # Note that tokens are classified rather than input words, which means that
>>> # there might be more predicted token classes than words.
>>> # Multiple token classes might account for the same word
>>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
>>> labels = predicted_token_class_ids
>>> loss = model(**inputs, labels=labels).loss
BioGptForSequenceClassification
class transformers.BioGptForSequenceClassification
< source >
( config: BioGptConfig )
Parameters
config (BioGptConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The BioGpt Model transformer with a sequence classification head on top (linear layer).
BioGptForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do.
Since it does classification on the last token, it is required to know the position of the last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in each row of the batch).
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
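The sketch below illustrates the pooling rule described above, assuming the default pad_token_id of 1; the exact internal implementation may differ slightly:
>>> import torch
>>> pad_token_id = 1  # default pad_token_id from BioGptConfig
>>> input_ids = torch.tensor([[5, 8, 9, pad_token_id, pad_token_id]])
>>> # Index of the last non-padding token in each row; its hidden state feeds the classification head
>>> last_token_index = torch.ne(input_ids, pad_token_id).sum(-1) - 1  # tensor([2])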
forward
< source >
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BioGptConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The BioGptForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example of single-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, BioGptForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
>>> model = BioGptForSequenceClassification.from_pretrained("microsoft/biogpt")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax().item()
>>> # To train a model on num_labels classes, you can pass num_labels=num_labels to from_pretrained(...)
>>> num_labels = len(model.config.id2label)
>>> model = BioGptForSequenceClassification.from_pretrained("microsoft/biogpt", num_labels=num_labels)
>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, BioGptForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
>>> model = BioGptForSequenceClassification.from_pretrained("microsoft/biogpt", problem_type="multi_label_classification")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
>>> # To train a model on num_labels classes, you can pass num_labels=num_labels to from_pretrained(...)
>>> num_labels = len(model.config.id2label)
>>> model = BioGptForSequenceClassification.from_pretrained(
... "microsoft/biogpt", num_labels=num_labels, problem_type="multi_label_classification"
... )
>>> labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss |