Please note that since the gold labels are available on the test set, evaluation is performed on the test set.
An example using these processors is given in the run_xnli.py script.
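For reference, here is a minimal sketch of calling the XNLI processor directly (the language and the data directory below are placeholders):

```python
from transformers.data.processors.xnli import XnliProcessor

# Placeholder path pointing to a local copy of the XNLI data
processor = XnliProcessor(language="de")
examples = processor.get_test_examples("/path/to/XNLI")
print(examples[0].text_a, examples[0].text_b, examples[0].label)
```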
SQuAD
The Stanford Question Answering Dataset (SQuAD) is a benchmark that
evaluates the performance of models on question answering. Two versions are available, v1.1 and v2.0. The first version
(v1.1) was released together with the paper SQuAD: 100,000+ Questions for Machine Comprehension of Text. The second version (v2.0) was released alongside the paper Know What You Don't
Know: Unanswerable Questions for SQuAD.
This library hosts a processor for each of the two versions:
Processors
Those processors are:
[~data.processors.utils.SquadV1Processor]
[~data.processors.utils.SquadV2Processor]
They both inherit from the abstract class [~data.processors.utils.SquadProcessor]
[[autodoc]] data.processors.squad.SquadProcessor
- all
Additionally, the following method can be used to convert SQuAD examples into
[~data.processors.utils.SquadFeatures] that can be used as model inputs.
[[autodoc]] data.processors.squad.squad_convert_examples_to_features
These processors as well as the aforementioned method can be used with files containing the data as well as with the
tensorflow_datasets package. Examples are given below.
Example usage
Here is an example using the processors as well as the conversion method with data files:

```python
from transformers import SquadV1Processor, SquadV2Processor, squad_convert_examples_to_features

# Loading a V2 processor
processor = SquadV2Processor()
examples = processor.get_dev_examples(squad_v2_data_dir)

# Loading a V1 processor
processor = SquadV1Processor()
examples = processor.get_dev_examples(squad_v1_data_dir)

features = squad_convert_examples_to_features(
    examples=examples,
    tokenizer=tokenizer,
    max_seq_length=max_seq_length,
    doc_stride=args.doc_stride,
    max_query_length=max_query_length,
    is_training=not evaluate,
)
```
Using tensorflow_datasets is as easy as using a data file:
```python
import tensorflow_datasets as tfds

# tensorflow_datasets only handles SQuAD V1.
tfds_examples = tfds.load("squad")
examples = SquadV1Processor().get_examples_from_dataset(tfds_examples, evaluate=evaluate)
features = squad_convert_examples_to_features(
    examples=examples,
    tokenizer=tokenizer,
    max_seq_length=max_seq_length,
    doc_stride=args.doc_stride,
    max_query_length=max_query_length,
    is_training=not evaluate,
)
```
Another example using these processors is given in the run_squad.py script.
Trainer
The [Trainer] class provides an API for feature-complete training in PyTorch. It supports distributed training on multiple GPUs/TPUs and mixed precision on NVIDIA and AMD GPUs through torch.amp. [Trainer] goes hand-in-hand with the [TrainingArguments] class, which offers a wide range of options to customize how a model is trained. Together, these two classes provide a complete training API.
[Seq2SeqTrainer] and [Seq2SeqTrainingArguments] inherit from the [Trainer] and [TrainingArguments] classes, and they're adapted for training models for sequence-to-sequence tasks such as summarization or translation.
The [Trainer] class is optimized for 🤗 Transformers models and can have surprising behaviors
when used with other models. When using it with your own model, make sure:
your model always returns tuples or subclasses of [~utils.ModelOutput]
your model can compute the loss if a labels argument is provided, and that loss is returned as the first
element of the tuple (if your model returns tuples)
your model can accept multiple label arguments (use label_names in [TrainingArguments] to indicate their names to the [Trainer]), but none of them should be named "label"
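As a minimal sketch of how the two classes fit together (assuming model, train_dataset and eval_dataset have already been created, e.g. a model from from_pretrained and tokenized datasets):

```python
from transformers import Trainer, TrainingArguments

# `model`, `train_dataset` and `eval_dataset` are assumed to exist already.
training_args = TrainingArguments(
    output_dir="my-model",          # where checkpoints and logs are written
    per_device_train_batch_size=8,
    num_train_epochs=3,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```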
Trainer[[api-reference]]
[[autodoc]] Trainer
- all
Seq2SeqTrainer
[[autodoc]] Seq2SeqTrainer
- evaluate
- predict
TrainingArguments
[[autodoc]] TrainingArguments
- all
Seq2SeqTrainingArguments
[[autodoc]] Seq2SeqTrainingArguments
- all
Data Collator
Data collators are objects that will form a batch by using a list of dataset elements as input. These elements are of
the same type as the elements of train_dataset or eval_dataset.
To be able to build batches, data collators may apply some processing (like padding). Some of them (like
[DataCollatorForLanguageModeling]) also apply some random data augmentation (like random masking)
on the formed batch.
Examples of use can be found in the example scripts or example notebooks.
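For instance, here is a minimal sketch of a padding collator in action (the checkpoint name is only an example):

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorWithPadding(tokenizer=tokenizer)

# Two tokenized examples of different lengths...
features = [tokenizer("a short sentence"), tokenizer("a noticeably longer sentence than the first one")]
# ...are padded to the same length and stacked into PyTorch tensors.
batch = collator(features)
print(batch["input_ids"].shape)
```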
Default data collator
[[autodoc]] data.data_collator.default_data_collator
DefaultDataCollator
[[autodoc]] data.data_collator.DefaultDataCollator
DataCollatorWithPadding
[[autodoc]] data.data_collator.DataCollatorWithPadding
DataCollatorForTokenClassification
[[autodoc]] data.data_collator.DataCollatorForTokenClassification
DataCollatorForSeq2Seq
[[autodoc]] data.data_collator.DataCollatorForSeq2Seq
DataCollatorForLanguageModeling
[[autodoc]] data.data_collator.DataCollatorForLanguageModeling
- numpy_mask_tokens
- tf_mask_tokens
- torch_mask_tokens
DataCollatorForWholeWordMask
[[autodoc]] data.data_collator.DataCollatorForWholeWordMask
- numpy_mask_tokens
- tf_mask_tokens
- torch_mask_tokens
DataCollatorForPermutationLanguageModeling
[[autodoc]] data.data_collator.DataCollatorForPermutationLanguageModeling
- numpy_mask_tokens
- tf_mask_tokens
- torch_mask_tokens
DeepSpeed
DeepSpeed, powered by the Zero Redundancy Optimizer (ZeRO), is an optimization library for training and fitting very large models onto a GPU. It is available in several ZeRO stages, where each stage progressively saves more GPU memory by partitioning the optimizer states, gradients, and parameters, and by enabling offloading to a CPU or NVMe. DeepSpeed is integrated with the [Trainer] class and most of the setup is automatically taken care of for you.
However, if you want to use DeepSpeed without the [Trainer], Transformers provides a [HfDeepSpeedConfig] class.
Learn more about using DeepSpeed with [Trainer] in the DeepSpeed guide.
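As a rough sketch of using [HfDeepSpeedConfig] without the [Trainer] (the config dict below is a stripped-down placeholder; a real setup needs a complete DeepSpeed configuration):

```python
import deepspeed
from transformers import AutoModel
from transformers.integrations import HfDeepSpeedConfig

# Placeholder config; a real setup would use a complete DeepSpeed JSON config.
ds_config = {"train_micro_batch_size_per_gpu": 1, "zero_optimization": {"stage": 3}}

# Create the HfDeepSpeedConfig *before* the model and keep it alive, so that ZeRO-3
# weight partitioning is detected while from_pretrained loads the weights.
dschf = HfDeepSpeedConfig(ds_config)
model = AutoModel.from_pretrained("gpt2")

# Hand the model and the same config to DeepSpeed to build the actual engine.
engine, *_ = deepspeed.initialize(model=model, config=ds_config)
```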
HfDeepSpeedConfig
[[autodoc]] integrations.HfDeepSpeedConfig
- all
Configuration
The base class [PretrainedConfig] implements the common methods for loading/saving a configuration
either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded
from HuggingFace's AWS S3 repository).
Each derived config class implements model specific attributes. Common attributes present in all config classes are:
hidden_size, num_attention_heads, and num_hidden_layers. Text models further implement:
vocab_size.
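As a small sketch using BertConfig as an example of a derived class (the checkpoint name and output directory are placeholders):

```python
from transformers import BertConfig

# Load the configuration of a pretrained checkpoint and inspect common attributes.
config = BertConfig.from_pretrained("bert-base-uncased")
print(config.hidden_size, config.num_attention_heads, config.num_hidden_layers, config.vocab_size)

# Tweak a model-specific attribute and save the result locally.
config.attention_probs_dropout_prob = 0.2
config.save_pretrained("my-bert-config")  # writes my-bert-config/config.json
reloaded = BertConfig.from_pretrained("my-bert-config")
```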
PretrainedConfig
[[autodoc]] PretrainedConfig
- push_to_hub
- all
Logging
🤗 Transformers has a centralized logging system, so that you can set up the verbosity of the library easily.
Currently the default verbosity of the library is WARNING.
To change the level of verbosity, just use one of the direct setters. For instance, here is how to change the verbosity
to the INFO level.
```python
import transformers

transformers.logging.set_verbosity_info()
```
You can also use the environment variable TRANSFORMERS_VERBOSITY to override the default verbosity. You can set it
to one of the following: debug, info, warning, error, critical. For example:
TRANSFORMERS_VERBOSITY=error ./myprogram.py
Additionally, some warnings can be disabled by setting the environment variable
TRANSFORMERS_NO_ADVISORY_WARNINGS to a true value, like 1. This will disable any warning that is logged using
[logger.warning_advice]. For example:
TRANSFORMERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py
Here is an example of how to use the same logger as the library in your own module or script:
```python
from transformers.utils import logging

logging.set_verbosity_info()
logger = logging.get_logger("transformers")
logger.info("INFO")
logger.warning("WARN")
```
All the methods of this logging module are documented below; the main ones are
[logging.get_verbosity] to get the current level of verbosity in the logger and
[logging.set_verbosity] to set the verbosity to the level of your choice. In order (from the least
verbose to the most verbose), those levels (with their corresponding int values in parentheses) are:
transformers.logging.CRITICAL or transformers.logging.FATAL (int value, 50): only reports the most
critical errors.
transformers.logging.ERROR (int value, 40): only reports errors.
transformers.logging.WARNING or transformers.logging.WARN (int value, 30): only reports errors and
warnings. This is the default level used by the library.
transformers.logging.INFO (int value, 20): reports errors, warnings, and basic information.
transformers.logging.DEBUG (int value, 10): reports all information.
By default, tqdm progress bars are displayed during model download. [logging.disable_progress_bar] and [logging.enable_progress_bar] can be used to suppress or re-enable this behavior.
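For example:

```python
from transformers.utils import logging

logging.disable_progress_bar()  # hide tqdm bars (e.g. during model downloads)
logging.enable_progress_bar()   # show them again
```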
logging vs warnings
Python has two logging systems that are often used in conjunction: logging, which is explained above, and warnings,
which allows further classification of warnings in specific buckets, e.g., FutureWarning for a feature or path
that has already been deprecated and DeprecationWarning to indicate an upcoming deprecation.
We use both in the transformers library. We leverage and adapt logging's captureWarnings method to allow
management of these warning messages by the verbosity setters above.
What does that mean for developers of the library? We should respect the following heuristic:
- warnings should be favored for developers of the library and libraries dependent on transformers
- logging should be used for end-users of the library using it in everyday projects
See reference of the captureWarnings method below.
[[autodoc]] logging.captureWarnings
Base setters
[[autodoc]] logging.set_verbosity_error
[[autodoc]] logging.set_verbosity_warning
[[autodoc]] logging.set_verbosity_info
[[autodoc]] logging.set_verbosity_debug
Other functions
[[autodoc]] logging.get_verbosity
[[autodoc]] logging.set_verbosity
[[autodoc]] logging.get_logger
[[autodoc]] logging.enable_default_handler
[[autodoc]] logging.disable_default_handler
[[autodoc]] logging.enable_explicit_format
[[autodoc]] logging.reset_format
[[autodoc]] logging.enable_progress_bar
[[autodoc]] logging.disable_progress_bar
Image Processor
An image processor is in charge of preparing input features for vision models and post-processing their outputs. This includes transformations such as resizing, normalization, and conversion to PyTorch, TensorFlow, Flax, and NumPy tensors. It may also include model-specific post-processing such as converting logits to segmentation masks.
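As a minimal sketch (the checkpoint and image URL are only examples):

```python
import requests
from PIL import Image
from transformers import AutoImageProcessor

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")

# Resizes and normalizes the image and returns a BatchFeature of framework tensors.
inputs = image_processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])
```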
ImageProcessingMixin
[[autodoc]] image_processing_utils.ImageProcessingMixin
- from_pretrained
- save_pretrained
BatchFeature
[[autodoc]] BatchFeature
BaseImageProcessor
[[autodoc]] image_processing_utils.BaseImageProcessor
Callbacks
Callbacks are objects that can customize the behavior of the training loop in the PyTorch
[Trainer] (this feature is not yet implemented in TensorFlow). They can inspect the training loop
state (for progress reporting, logging on TensorBoard or other ML platforms) and take decisions (like early
stopping).
Callbacks are "read only" pieces of code: apart from the [TrainerControl] object they return, they
cannot change anything in the training loop. For customizations that require changes in the training loop, you should
subclass [Trainer] and override the methods you need (see trainer for examples).
By default, TrainingArguments.report_to is set to "all", so a [Trainer] will use the following callbacks.
[DefaultFlowCallback] which handles the default behavior for logging, saving and evaluation.
[PrinterCallback] or [ProgressCallback] to display progress and print the
logs (the first one is used if you deactivate tqdm through the [TrainingArguments], otherwise
it's the second one).
[~integrations.TensorBoardCallback] if tensorboard is accessible (either through PyTorch >= 1.4
or tensorboardX).
[~integrations.WandbCallback] if wandb is installed.
[~integrations.CometCallback] if comet_ml is installed.
[~integrations.MLflowCallback] if mlflow is installed.
[~integrations.NeptuneCallback] if neptune is installed.
[~integrations.AzureMLCallback] if azureml-sdk is
installed.
[~integrations.CodeCarbonCallback] if codecarbon is
installed.
[~integrations.ClearMLCallback] if clearml is installed.
[~integrations.DagsHubCallback] if dagshub is installed.
[~integrations.FlyteCallback] if flyte is installed.
[~integrations.DVCLiveCallback] if dvclive is installed.
If a package is installed but you don't wish to use the accompanying integration, you can change TrainingArguments.report_to to a list of just those integrations you want to use (e.g. ["azure_ml", "wandb"]).
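For example:

```python
from transformers import TrainingArguments

# Report only to Weights & Biases and TensorBoard, even if other integrations are installed.
args = TrainingArguments(output_dir="out", report_to=["wandb", "tensorboard"])
```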
The main class that implements callbacks is [TrainerCallback]. It gets the
[TrainingArguments] used to instantiate the [Trainer], can access that
Trainer's internal state via [TrainerState], and can take some actions on the training loop via
[TrainerControl].
Available Callbacks
Here is the list of the available [TrainerCallback] in the library:
[[autodoc]] integrations.CometCallback
- setup
[[autodoc]] DefaultFlowCallback
[[autodoc]] PrinterCallback
[[autodoc]] ProgressCallback
[[autodoc]] EarlyStoppingCallback
[[autodoc]] integrations.TensorBoardCallback
[[autodoc]] integrations.WandbCallback
- setup
[[autodoc]] integrations.MLflowCallback
- setup
[[autodoc]] integrations.AzureMLCallback
[[autodoc]] integrations.CodeCarbonCallback
[[autodoc]] integrations.NeptuneCallback
[[autodoc]] integrations.ClearMLCallback
[[autodoc]] integrations.DagsHubCallback
[[autodoc]] integrations.FlyteCallback
[[autodoc]] integrations.DVCLiveCallback
- setup
TrainerCallback
[[autodoc]] TrainerCallback
Here is an example of how to register a custom callback with the PyTorch [Trainer]:
```python
from transformers import Trainer, TrainerCallback

class MyCallback(TrainerCallback):
    "A callback that prints a message at the beginning of training"

    def on_train_begin(self, args, state, control, **kwargs):
        print("Starting training")

trainer = Trainer(
    model,
    args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    callbacks=[MyCallback],  # We can either pass the callback class this way or an instance of it (MyCallback())
)
```
Another way to register a callback is to call trainer.add_callback() as follows:
```python
trainer = Trainer()
trainer.add_callback(MyCallback)

# Alternatively, we can pass an instance of the callback class
trainer.add_callback(MyCallback())
```
TrainerState
[[autodoc]] TrainerState
TrainerControl
[[autodoc]] TrainerControl
Backbone
A backbone is a model used for feature extraction for higher level computer vision tasks such as object detection and image classification. Transformers provides an [AutoBackbone] class for initializing a Transformers backbone from pretrained model weights, and two utility classes:
[~utils.BackboneMixin] enables initializing a backbone from Transformers or timm and includes functions for returning the output features and indices.
[~utils.BackboneConfigMixin] sets the output features and indices of the backbone configuration.
timm models are loaded with the [TimmBackbone] and [TimmBackboneConfig] classes.
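A minimal sketch of loading a backbone and retrieving feature maps (the Swin checkpoint and image URL are only examples):

```python
import requests
import torch
from PIL import Image
from transformers import AutoBackbone, AutoImageProcessor

processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model = AutoBackbone.from_pretrained("microsoft/swin-tiny-patch4-window7-224", out_indices=(1,))

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
inputs = processor(image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
feature_maps = outputs.feature_maps  # one tensor per requested stage
```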
Backbones are supported for the following models:
BEiT
BiT
ConvNeXt
ConvNextV2
DiNAT
DINOV2
FocalNet
MaskFormer
NAT
ResNet
Swin Transformer
Swin Transformer v2
ViTDet
AutoBackbone
[[autodoc]] AutoBackbone
BackboneMixin
[[autodoc]] utils.BackboneMixin
BackboneConfigMixin
[[autodoc]] utils.BackboneConfigMixin
TimmBackbone
[[autodoc]] models.timm_backbone.TimmBackbone
TimmBackboneConfig
[[autodoc]] models.timm_backbone.TimmBackboneConfig
Quantization
Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types like 8-bit integers (int8). This makes it possible to load larger models that normally wouldn't fit into memory and to speed up inference. Transformers supports the AWQ and GPTQ quantization algorithms, and it supports 8-bit and 4-bit quantization with bitsandbytes.
Quantization techniques that aren't supported in Transformers can be added with the [HfQuantizer] class.
Learn how to quantize models in the Quantization guide.
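As a short sketch of 4-bit loading with bitsandbytes (the checkpoint is only an example; a CUDA GPU and the bitsandbytes package are required):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=quantization_config,
    device_map="auto",
)
```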
AqlmConfig
[[autodoc]] AqlmConfig
AwqConfig
[[autodoc]] AwqConfig
GPTQConfig
[[autodoc]] GPTQConfig
BitsAndBytesConfig
[[autodoc]] BitsAndBytesConfig
HfQuantizer
[[autodoc]] quantizers.base.HfQuantizer
Exporting 🤗 Transformers models to ONNX
🤗 Transformers provides a transformers.onnx package that enables you to
convert model checkpoints to an ONNX graph by leveraging configuration objects.
See the guide on exporting 🤗 Transformers models for more
details.
ONNX Configurations
We provide three abstract classes that you should inherit from, depending on the
type of model architecture you wish to export:
Encoder-based models inherit from [~onnx.config.OnnxConfig]
Decoder-based models inherit from [~onnx.config.OnnxConfigWithPast]
Encoder-decoder models inherit from [~onnx.config.OnnxSeq2SeqConfigWithPast]
OnnxConfig
[[autodoc]] onnx.config.OnnxConfig
OnnxConfigWithPast
[[autodoc]] onnx.config.OnnxConfigWithPast
OnnxSeq2SeqConfigWithPast
[[autodoc]] onnx.config.OnnxSeq2SeqConfigWithPast
ONNX Features
Each ONNX configuration is associated with a set of features that enable you
to export models for different types of topologies or tasks.
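As a sketch of how the configuration objects and the features manager fit together (the checkpoint and output path below are placeholders):

```python
from pathlib import Path

from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers.onnx import FeaturesManager, export

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Look up the ONNX config class registered for this architecture/feature pair.
model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(
    model, feature="sequence-classification"
)
onnx_config = model_onnx_config(model.config)

# Export the graph; returns the ordered input and output names of the ONNX model.
onnx_inputs, onnx_outputs = export(
    preprocessor=tokenizer,
    model=model,
    config=onnx_config,
    opset=onnx_config.default_onnx_opset,
    output=Path("model.onnx"),
)
```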
FeaturesManager
[[autodoc]] onnx.features.FeaturesManager