( model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module] = None tokenizer: typing.Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None feature: str = 'default' args: ORTTrainingArguments = None data_collator: typing.Optional[DataCollator] = None train_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None eval_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None model_init: typing.Callable[[], transformers.modeling_utils.PreTrainedModel] = None compute_metrics: typing.Union[typing.Callable[[transformers.trainer_utils.EvalPrediction], typing.Dict], NoneType] = None callbacks: typing.Optional[typing.List[transformers.trainer_callback.TrainerCallback]] = None optimizers: typing.Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None) preprocess_logits_for_metrics: typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor] = None onnx_model_path: typing.Union[str, os.PathLike] = None )
Parameters
model (PreTrainedModel or torch.nn.Module, optional) —
The model to train, evaluate or use for predictions. If not provided, a model_init must be passed.
ORTTrainer is optimized to work with the PreTrainedModel provided by the transformers library. You can still use your own models defined as torch.nn.Module for training with the ONNX Runtime backend and inference with the PyTorch backend, as long as they work the same way as the 🤗 Transformers models.
args (ORTTrainingArguments, optional) —
The arguments to tweak for training. Will default to a basic instance of ORTTrainingArguments with the output_dir set to a directory named tmp_trainer in the current directory if not provided.
data_collator (DataCollator, optional) —
The function to use to form a batch from a list of elements of train_dataset or eval_dataset. Will default to default_data_collator if no tokenizer is provided, an instance of DataCollatorWithPadding otherwise.
train_dataset (torch.utils.data.Dataset or torch.utils.data.IterableDataset, optional) —
The dataset to use for training. If it is a Dataset, columns not accepted by the model.forward() method are automatically removed.
Note that if it is a torch.utils.data.IterableDataset with some randomization and you are training in a distributed fashion, your iterable dataset should either use an internal attribute generator that is a torch.Generator for the randomization that must be identical on all processes (and the ORTTrainer will manually set the seed of this generator at each epoch) or have a set_epoch() method that internally sets the seed of the RNGs used.
eval_dataset (Union[torch.utils.data.Dataset, Dict[str, torch.utils.data.Dataset]], optional) —
The dataset to use for evaluation. If it is a Dataset, columns not accepted by the model.forward() method are automatically removed. If it is a dictionary, it will evaluate on each dataset, prepending the dictionary key to the metric name.
model_init (Callable[[], PreTrainedModel], optional) —
A function that instantiates the model to be used. If provided, each call to ORTTrainer.train will start from a new instance of the model as given by this function.
The function may have zero arguments, or a single one containing the optuna/Ray Tune/SigOpt trial object, to be able to choose different architectures according to hyperparameters (such as layer count, sizes of inner layers, dropout probabilities, etc.).
compute_metrics (Callable[[EvalPrediction], Dict], optional) —
The function that will be used to compute metrics at evaluation. Must take an EvalPrediction and return a dictionary mapping metric names to metric values.
callbacks (List of TrainerCallback, optional) —
A list of callbacks to customize the training loop. These will be added to the list of default callbacks.
If you want to remove one of the default callbacks used, use the ORTTrainer.remove_callback method.
optimizers (Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR], optional) —
A tuple containing the optimizer and the scheduler to use. Will default to an instance of AdamW on your model and a scheduler given by get_linear_schedule_with_warmup controlled by args.
preprocess_logits_for_metrics (Callable[[torch.Tensor, torch.Tensor], torch.Tensor], optional) —
A function that preprocesses the logits right before caching them at each evaluation step. Must take two tensors, the logits and the labels, and return the logits once processed as desired. The modifications made by this function will be reflected in the predictions received by compute_metrics.
Note that the labels (second parameter) will be None if the dataset does not have them.
ORTTrainer is a simple but feature-complete training and eval loop for ONNX Runtime, optimized for 🤗 Transformers.
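A minimal end-to-end sketch is shown below. It assumes a text classification setup; the checkpoint name, dataset, feature value and metric are illustrative choices, not requirements of the API.

```python
# Minimal fine-tuning sketch with ORTTrainer. The checkpoint, dataset,
# feature value and metric are illustrative assumptions.
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.onnxruntime import ORTTrainer, ORTTrainingArguments

model_name = "distilbert-base-uncased"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenize an example dataset (GLUE SST-2 is only used for illustration).
dataset = load_dataset("glue", "sst2")
dataset = dataset.map(lambda ex: tokenizer(ex["sentence"], truncation=True), batched=True)

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is an EvalPrediction: (predictions, label_ids)
    logits, labels = eval_pred
    return accuracy.compute(predictions=np.argmax(logits, axis=-1), references=labels)

args = ORTTrainingArguments(
    output_dir="tmp_trainer",
    evaluation_strategy="epoch",
    num_train_epochs=1,
)

trainer = ORTTrainer(
    model=model,
    args=args,
    feature="sequence-classification",  # assumed ONNX export feature for this task
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)

trainer.train()
metrics = trainer.evaluate()
```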
Important attributes:
model_wrapped — The outermost model used for the forward pass when the original model is wrapped by one or more other modules. For example, under DeepSpeed, the inner model is first wrapped in ORTModule, then in DeepSpeed, and then again in torch.nn.DistributedDataParallel. If the inner model hasn’t been wrapped, then self.model_wrapped is the same as self.model.
place_model_on_device — Whether or not to automatically place the model on the device. It will be set to False if model parallel or DeepSpeed is used, or if the default ORTTrainingArguments.place_model_on_device is overridden to return False.
is_in_train — Whether or not the model is currently running train (e.g. when evaluate is called while in train).
How the loss is computed by ORTTrainer. By default, all models return the loss in the first element. Subclass and override for custom behavior.
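As a sketch of the subclass-and-override pattern, the hypothetical subclass below reweights a cross-entropy loss; it assumes compute_loss keeps the same signature as in transformers.Trainer, and the class weights are arbitrary.

```python
# Hypothetical subclass overriding compute_loss with a weighted cross-entropy;
# assumes the same compute_loss signature as transformers.Trainer, and the
# class weights are arbitrary illustration values.
import torch
from optimum.onnxruntime import ORTTrainer

class WeightedLossORTTrainer(ORTTrainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits if hasattr(outputs, "logits") else outputs[0]
        loss_fct = torch.nn.CrossEntropyLoss(
            weight=torch.tensor([1.0, 3.0], device=logits.device)
        )
        loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```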
Set up the optimizer.
We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the ORTTrainer’s init through optimizers, or subclass and override this method.
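For example, a custom optimizer/scheduler pair can be supplied through optimizers instead of overriding this method; the hyperparameter values are arbitrary, and model and train_dataset are assumed to be defined as in the earlier sketch.

```python
# Sketch: passing a custom optimizer and scheduler instead of relying on
# create_optimizer. `model` and `train_dataset` are assumed to exist already;
# the hyperparameters are arbitrary.
import torch
from transformers import get_linear_schedule_with_warmup
from optimum.onnxruntime import ORTTrainer, ORTTrainingArguments

args = ORTTrainingArguments(output_dir="tmp_trainer", max_steps=1000)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=args.max_steps
)

trainer = ORTTrainer(
    model=model,
    args=args,
    feature="sequence-classification",  # assumed task feature
    train_dataset=train_dataset,
    optimizers=(optimizer, scheduler),
)
```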
( eval_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'eval' inference_with_ort: bool = False )
Parameters
eval_dataset (Dataset, optional) —
Pass a dataset if you wish to override self.eval_dataset. If it is a Dataset, columns not accepted by the model.forward() method are automatically removed. It must implement the __len__ method.
ignore_keys (List[str], optional) —
A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
metric_key_prefix (str, optional, defaults to "eval") —
An optional prefix to be used as the metrics key prefix. For example the metric “bleu” will be named “eval_bleu” if the prefix is “eval” (default).
Run evaluation with the ONNX Runtime or PyTorch backend and return metrics.
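For instance, assuming trainer is an already-built ORTTrainer, evaluation can be switched to the ONNX Runtime backend:

```python
# Sketch: evaluating with the ONNX Runtime backend; `trainer` is assumed to be
# an already-constructed ORTTrainer.
metrics = trainer.evaluate(inference_with_ort=True)
print(metrics["eval_loss"])  # metric keys are prefixed with "eval" by default

# A custom prefix changes the metric names, e.g. "dev_loss":
metrics = trainer.evaluate(metric_key_prefix="dev")
```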
( dataloader: DataLoader description: str prediction_loss_only: typing.Optional[bool] = None ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'eval' )
Prediction/evaluation loop, shared by ORTTrainer.evaluate() and ORTTrainer.predict().
Works both with or without labels.
( args: ORTTrainingArguments )
Returns the optimizer class and optimizer parameters implemented in ONNX Runtime based on ORTTrainingArguments.
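This resolution is driven by the optim field of ORTTrainingArguments; a minimal sketch, assuming the adamw_ort_fused value documented below, is:

```python
# Sketch: requesting the AdamW implementation fused by ONNX Runtime through
# the training arguments; the other fields are arbitrary.
from optimum.onnxruntime import ORTTrainingArguments

args = ORTTrainingArguments(
    output_dir="tmp_trainer",
    optim="adamw_ort_fused",  # optimizer implemented by ONNX Runtime (see the optim parameter below)
    learning_rate=5e-5,
)
```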
( test_dataset: Dataset ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'test' inference_with_ort: bool = False )
Parameters
test_dataset (Dataset) —
Dataset to run the predictions on. If it is a datasets.Dataset, columns not accepted by the model.forward() method are automatically removed. Has to implement the method __len__.
ignore_keys (List[str], optional) —
A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
metric_key_prefix (str, optional, defaults to "test") —
An optional prefix to be used as the metrics key prefix. For example the metric “bleu” will be named “test_bleu” if the prefix is “test” (default).
Run prediction and return predictions and potential metrics.
Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method will also return metrics, like in evaluate().
If your predictions or labels have different sequence length (for instance because you’re doing dynamic padding in a token classification task) the predictions will be padded (on the right) to allow for concatenation into one array. The padding index is -100.
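A short sketch, assuming trainer and a tokenized test_dataset already exist; the fields of the returned namedtuple are described just below.

```python
# Sketch: running prediction with the ONNX Runtime backend; `trainer` and
# `test_dataset` are assumed to exist already.
output = trainer.predict(test_dataset, inference_with_ort=True)
print(output.predictions.shape)  # raw predictions as np.ndarray
print(output.metrics)            # e.g. contains "test_loss" if labels were present
```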
Returns: NamedTuple A namedtuple with the following keys:
predictions (np.ndarray): The predictions on test_dataset.
label_ids (np.ndarray, optional): The labels (if the dataset contained some).
metrics (Dict[str, float], optional): The potential dictionary of metrics (if the dataset contained labels).
( dataloader: DataLoader description: str prediction_loss_only: typing.Optional[bool] = None ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'eval' )
Prediction/evaluation loop, shared by ORTTrainer.evaluate() and ORTTrainer.predict().
Works both with or without labels.
( model: ORTModel inputs: typing.Dict[str, typing.Union[torch.Tensor, typing.Any]] prediction_loss_only: bool ignore_keys: typing.Optional[typing.List[str]] = None ) → Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]
Parameters
model (ORTModel) —
The model to evaluate.
inputs (Dict[str, Union[torch.Tensor, Any]]) —
The inputs and targets of the model.
The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument labels. Check your model’s documentation for all accepted arguments.
prediction_loss_only (bool) —
Whether or not to return the loss only.
ignore_keys (List[str], optional) —
A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
Returns
Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]
A tuple with the loss, logits and labels (each being optional).
Perform an evaluation step on model using inputs.
( resume_from_checkpoint: typing.Union[bool, str, NoneType] = None trial: typing.Union[ForwardRef('optuna.Trial'), typing.Dict[str, typing.Any]] = None ignore_keys_for_eval: typing.Optional[typing.List[str]] = None **kwargs )
Parameters
resume_from_checkpoint (str or bool, optional) —
If a str, local path to a saved checkpoint as saved by a previous instance of ORTTrainer. If a bool and equals True, load the last checkpoint in args.output_dir as saved by a previous instance of ORTTrainer. If present, training will resume from the model/optimizer/scheduler states loaded here.
trial (optuna.Trial or Dict[str, Any], optional) —
The trial run or the hyperparameter dictionary for hyperparameter search.
ignore_keys_for_eval (List[str], optional) —
A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions for evaluation during the training.
kwargs —
Additional keyword arguments used to hide deprecated arguments.
Main entry point for training with ONNX Runtime accelerator.
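A hedged sketch of typical calls, assuming trainer is a configured ORTTrainer; the checkpoint path is only an example value.

```python
# Sketch: training, then resuming; `trainer` is assumed to be a configured
# ORTTrainer and the checkpoint path is an example value.
trainer.train()
trainer.save_model()

# Resume from the most recent checkpoint saved under args.output_dir:
trainer.train(resume_from_checkpoint=True)

# Or resume from an explicit checkpoint directory:
trainer.train(resume_from_checkpoint="tmp_trainer/checkpoint-500")
```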
( model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module] = None tokenizer: typing.Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None feature: str = 'default' args: ORTTrainingArguments = None data_collator: typing.Optional[DataCollator] = None train_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None eval_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None model_init: typing.Callable[[], transformers.modeling_utils.PreTrainedModel] = None compute_metrics: typing.Union[typing.Callable[[transformers.trainer_utils.EvalPrediction], typing.Dict], NoneType] = None callbacks: typing.Optional[typing.List[transformers.trainer_callback.TrainerCallback]] = None optimizers: typing.Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None) preprocess_logits_for_metrics: typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor] = None onnx_model_path: typing.Union[str, os.PathLike] = None )
( eval_dataset: typing.Optional[torch.utils.data.dataset.Dataset] = None ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'eval' inference_with_ort: bool = False **gen_kwargs )
Parameters
eval_dataset (Dataset, optional) —
Pass a dataset if you wish to override self.eval_dataset. If it is a Dataset, columns not accepted by the model.forward() method are automatically removed. It must implement the __len__ method.
ignore_keys (List[str], optional) —
A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
metric_key_prefix (str, optional, defaults to "eval") —
An optional prefix to be used as the metrics key prefix. For example the metric “bleu” will be named “eval_bleu” if the prefix is “eval” (default).
Run evaluation with the ONNX Runtime or PyTorch backend and return metrics.
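As a sketch, generation options can be passed through **gen_kwargs, assuming they are forwarded to the model’s generate method as in transformers.Seq2SeqTrainer; trainer is assumed here to be the seq2seq variant built with predict_with_generate=True.

```python
# Sketch: seq2seq evaluation with ONNX Runtime, forwarding generation options
# through **gen_kwargs (assumed to reach model.generate()); `trainer` is
# assumed to be a seq2seq trainer built with predict_with_generate=True.
metrics = trainer.evaluate(
    inference_with_ort=True,
    max_length=128,
    num_beams=4,
)
```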
( test_dataset: Dataset ignore_keys: typing.Optional[typing.List[str]] = None metric_key_prefix: str = 'eval' inference_with_ort: bool = False **gen_kwargs )
Parameters
test_dataset (Dataset) —
Dataset to run the predictions on. If it is a datasets.Dataset, columns not accepted by the model.forward() method are automatically removed. Has to implement the method __len__.
ignore_keys (List[str], optional) —
A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.
metric_key_prefix (str, optional, defaults to "test") —
An optional prefix to be used as the metrics key prefix. For example the metric “bleu” will be named “test_bleu” if the prefix is “test” (default).
Run prediction and return predictions and potential metrics.
Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method will also return metrics, like in evaluate().
If your predictions or labels have different sequence length (for instance because you’re doing dynamic padding in a token classification task) the predictions will be padded (on the right) to allow for concatenation into one array. The padding index is -100.
Returns: NamedTuple A namedtuple with the following keys:
predictions (np.ndarray): The predictions on test_dataset.
label_ids (np.ndarray, optional): The labels (if the dataset contained some).
metrics (Dict[str, float], optional): The potential dictionary of metrics (if the dataset contained labels).
( output_dir: str overwrite_output_dir: bool = False do_train: bool = False do_eval: bool = False do_predict: bool = False evaluation_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only: bool = False per_device_train_batch_size: int = 8 per_device_eval_batch_size: int = 8 per_gpu_train_batch_size: typing.Optional[int] = None per_gpu_eval_batch_size: typing.Optional[int] = None gradient_accumulation_steps: int = 1 eval_accumulation_steps: typing.Optional[int] = None eval_delay: typing.Optional[float] = 0 learning_rate: float = 5e-05 weight_decay: float = 0.0 adam_beta1: float = 0.9 adam_beta2: float = 0.999 adam_epsilon: float = 1e-08 max_grad_norm: float = 1.0 num_train_epochs: float = 3.0 max_steps: int = -1 lr_scheduler_type: typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' warmup_ratio: float = 0.0 warmup_steps: int = 0 log_level: typing.Optional[str] = 'passive' log_level_replica: typing.Optional[str] = 'passive' log_on_each_node: bool = True logging_dir: typing.Optional[str] = None logging_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step: bool = False logging_steps: int = 500 logging_nan_inf_filter: bool = True save_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' save_steps: int = 500 save_total_limit: typing.Optional[int] = None save_on_each_node: bool = False no_cuda: bool = False use_mps_device: bool = False seed: int = 42 data_seed: typing.Optional[int] = None jit_mode_eval: bool = False use_ipex: bool = False bf16: bool = False fp16: bool = False fp16_opt_level: str = 'O1' half_precision_backend: str = 'auto' bf16_full_eval: bool = False fp16_full_eval: bool = False tf32: typing.Optional[bool] = None local_rank: int = -1 xpu_backend: typing.Optional[str] = None tpu_num_cores: typing.Optional[int] = None tpu_metrics_debug: bool = False debug: str = '' dataloader_drop_last: bool = False eval_steps: typing.Optional[int] = None dataloader_num_workers: int = 0 past_index: int = -1 run_name: typing.Optional[str] = None disable_tqdm: typing.Optional[bool] = None remove_unused_columns: typing.Optional[bool] = True label_names: typing.Optional[typing.List[str]] = None load_best_model_at_end: typing.Optional[bool] = False metric_for_best_model: typing.Optional[str] = None greater_is_better: typing.Optional[bool] = None ignore_data_skip: bool = False sharded_ddp: str = '' fsdp: str = '' fsdp_min_num_params: int = 0 fsdp_transformer_layer_cls_to_wrap: typing.Optional[str] = None deepspeed: typing.Optional[str] = None label_smoothing_factor: float = 0.0 optim: typing.Optional[str] = 'adamw_hf' optim_args: typing.Optional[str] = None adafactor: bool = False group_by_length: bool = False length_column_name: typing.Optional[str] = 'length' report_to: typing.Optional[typing.List[str]] = None ddp_find_unused_parameters: typing.Optional[bool] = None ddp_bucket_cap_mb: typing.Optional[int] = None dataloader_pin_memory: bool = True skip_memory_metrics: bool = True use_legacy_prediction_loop: bool = False push_to_hub: bool = False resume_from_checkpoint: typing.Optional[str] = None hub_model_id: typing.Optional[str] = None hub_strategy: typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token: typing.Optional[str] = None hub_private_repo: bool = False gradient_checkpointing: bool = False include_inputs_for_metrics: bool = False fp16_backend: str = 'auto' push_to_hub_model_id: typing.Optional[str] = None 
push_to_hub_organization: typing.Optional[str] = None push_to_hub_token: typing.Optional[str] = None mp_parameters: str = '' auto_find_batch_size: bool = False full_determinism: bool = False torchdynamo: typing.Optional[str] = None ray_scope: typing.Optional[str] = 'last' ddp_timeout: typing.Optional[int] = 1800 )
Parameters
optim (str or training_args.ORTOptimizerNames or transformers.training_args.OptimizerNames, optional, defaults to "adamw_hf") —
The optimizer to use, including the optimizers implemented in Transformers (adamw_hf, adamw_torch, adamw_apex_fused, or adafactor) as well as the optimizer implemented by ONNX Runtime (adamw_ort_fused).
( output_dir: str overwrite_output_dir: bool = False do_train: bool = False do_eval: bool = False do_predict: bool = False evaluation_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only: bool = False per_device_train_batch_size: int = 8 per_device_eval_batch_size: int = 8 per_gpu_train_batch_size: typing.Optional[int] = None per_gpu_eval_batch_size: typing.Optional[int] = None gradient_accumulation_steps: int = 1 eval_accumulation_steps: typing.Optional[int] = None eval_delay: typing.Optional[float] = 0 learning_rate: float = 5e-05 weight_decay: float = 0.0 adam_beta1: float = 0.9 adam_beta2: float = 0.999 adam_epsilon: float = 1e-08 max_grad_norm: float = 1.0 num_train_epochs: float = 3.0 max_steps: int = -1 lr_scheduler_type: typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' warmup_ratio: float = 0.0 warmup_steps: int = 0 log_level: typing.Optional[str] = 'passive' log_level_replica: typing.Optional[str] = 'passive' log_on_each_node: bool = True logging_dir: typing.Optional[str] = None logging_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step: bool = False logging_steps: int = 500 logging_nan_inf_filter: bool = True save_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' save_steps: int = 500 save_total_limit: typing.Optional[int] = None save_on_each_node: bool = False no_cuda: bool = False use_mps_device: bool = False seed: int = 42 data_seed: typing.Optional[int] = None jit_mode_eval: bool = False use_ipex: bool = False bf16: bool = False fp16: bool = False fp16_opt_level: str = 'O1' half_precision_backend: str = 'auto' bf16_full_eval: bool = False fp16_full_eval: bool = False tf32: typing.Optional[bool] = None local_rank: int = -1 xpu_backend: typing.Optional[str] = None tpu_num_cores: typing.Optional[int] = None tpu_metrics_debug: bool = False debug: str = '' dataloader_drop_last: bool = False eval_steps: typing.Optional[int] = None dataloader_num_workers: int = 0 past_index: int = -1 run_name: typing.Optional[str] = None disable_tqdm: typing.Optional[bool] = None remove_unused_columns: typing.Optional[bool] = True label_names: typing.Optional[typing.List[str]] = None load_best_model_at_end: typing.Optional[bool] = False metric_for_best_model: typing.Optional[str] = None greater_is_better: typing.Optional[bool] = None ignore_data_skip: bool = False sharded_ddp: str = '' fsdp: str = '' fsdp_min_num_params: int = 0 fsdp_transformer_layer_cls_to_wrap: typing.Optional[str] = None deepspeed: typing.Optional[str] = None label_smoothing_factor: float = 0.0 optim: typing.Optional[str] = 'adamw_hf' optim_args: typing.Optional[str] = None adafactor: bool = False group_by_length: bool = False length_column_name: typing.Optional[str] = 'length' report_to: typing.Optional[typing.List[str]] = None ddp_find_unused_parameters: typing.Optional[bool] = None ddp_bucket_cap_mb: typing.Optional[int] = None dataloader_pin_memory: bool = True skip_memory_metrics: bool = True use_legacy_prediction_loop: bool = False push_to_hub: bool = False resume_from_checkpoint: typing.Optional[str] = None hub_model_id: typing.Optional[str] = None hub_strategy: typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token: typing.Optional[str] = None hub_private_repo: bool = False gradient_checkpointing: bool = False include_inputs_for_metrics: bool = False fp16_backend: str = 'auto' push_to_hub_model_id: typing.Optional[str] = None 
push_to_hub_organization: typing.Optional[str] = None push_to_hub_token: typing.Optional[str] = None mp_parameters: str = '' auto_find_batch_size: bool = False full_determinism: bool = False torchdynamo: typing.Optional[str] = None ray_scope: typing.Optional[str] = 'last' ddp_timeout: typing.Optional[int] = 1800 sortish_sampler: bool = False predict_with_generate: bool = False generation_max_length: typing.Optional[int] = None generation_num_beams: typing.Optional[int] = None )
Parameters
optim (str or training_args.ORTOptimizerNames or transformers.training_args.OptimizerNames, optional, defaults to "adamw_hf") —
The optimizer to use, including the optimizers implemented in Transformers (adamw_hf, adamw_torch, adamw_apex_fused, or adafactor) as well as the optimizer implemented by ONNX Runtime (adamw_ort_fused).
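A configuration sketch combining the ONNX Runtime fused optimizer with generation-based evaluation; all field values are illustrative.

```python
# Sketch: seq2seq training arguments using the ONNX Runtime fused AdamW and
# generation-based evaluation; the values are illustrative assumptions.
from optimum.onnxruntime import ORTSeq2SeqTrainingArguments

args = ORTSeq2SeqTrainingArguments(
    output_dir="tmp_trainer",
    optim="adamw_ort_fused",
    predict_with_generate=True,
    generation_max_length=128,
    generation_num_beams=4,
    evaluation_strategy="steps",
    eval_steps=500,
)
```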