The ORTTrainer and ORTSeq2SeqTrainer classes provide APIs for training PyTorch models with ONNX Runtime (ORT). Using ONNX Runtime as the backend, ORTTrainer and ORTSeq2SeqTrainer optimize the computation graph and the memory usage. They also support mixed precision training implemented by ORT, as well as distributed training on multiple GPUs. With them, you can achieve lower latency, higher throughput, and larger maximum batch sizes while training large transformer models.
To use ONNX Runtime for training, you need a machine with at least one NVIDIA or AMD GPU.
To use ORTTrainer or ORTSeq2SeqTrainer, you need to install the ONNX Runtime Training module and Optimum.
To set up the environment, we strongly recommend you install the dependencies with Docker to ensure that the versions are correct and properly configured. You can find Dockerfiles with various combinations here.
For example, if you want to install onnxruntime-training 1.12.0 via the Dockerfile:
docker build -f Dockerfile-ort1.12.0-cu113 -t <imagename:tag> .
If you want to install the dependencies in a local Python environment instead, you can pip install them once you have CUDA 11.3 and cuDNN 8 properly installed.
pip install onnx==1.12.0 ninja
pip install onnxruntime-training==1.12.0+cu113 -f https://download.onnxruntime.ai/onnxruntime_stable_cu113.html
pip install torch==1.12.0+cu113 -f https://download.pytorch.org/whl/torch_stable.html
pip install torch-ort
pip install transformers datasets accelerate
pip install --upgrade protobuf==3.20.1
And run post-installation configuration:
python -m torch_ort.configure
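Once the configuration completes, you can optionally verify the setup from Python. The following is a minimal sanity-check sketch (not an official verification script); it only confirms that onnxruntime-training and torch are importable and that a GPU is visible:

```python
# Minimal sanity check (illustrative, not an official verification script):
# confirm that onnxruntime-training and torch can see the GPU before launching training.
import onnxruntime
import torch

print("onnxruntime version:", onnxruntime.__version__)
print("onnxruntime device:", onnxruntime.get_device())      # expected: "GPU"
print("torch CUDA available:", torch.cuda.is_available())   # expected: True
```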
You can install Optimum from PyPI:
pip install optimum
Or install from source:
pip install git+https://github.com/huggingface/optimum.git
This command installs the current main dev version of Optimum, which could include the latest developments (new features, bug fixes). However, the main version might not be very stable. If you run into any problems, please open an issue so that we can fix it as soon as possible.
The ORTTrainer class inherits from the Trainer class of Transformers. You can easily adapt your code by replacing Trainer of Transformers with ORTTrainer to take advantage of the acceleration empowered by ONNX Runtime. Here is an example of how to use ORTTrainer compared with Trainer:
-from transformers import Trainer, TrainingArguments
+from optimum.onnxruntime import ORTTrainer, ORTTrainingArguments
# Step 1: Define training arguments
-training_args = TrainingArguments(
+training_args = ORTTrainingArguments(
output_dir="path/to/save/folder/",
optim-"adamw_ort_fused",
...
)
# Step 2: Create your ONNX Runtime Trainer
-trainer = Trainer(
+trainer = ORTTrainer(
model=model,
args=training_args,
train_dataset=train_dataset,
+ feature="sequence-classification",
...
)
# Step 3: Use ONNX Runtime for training!🤗
trainer.train()
Check out more detailed example scripts in the optimum repository.
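For reference, here is a minimal end-to-end sketch built on the snippet above. The checkpoint (distilbert-base-uncased), the GLUE/sst2 dataset, and the hyperparameters are illustrative assumptions, not part of the official example scripts:

```python
# Minimal end-to-end sketch (checkpoint, dataset, and hyperparameters are assumptions)
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.onnxruntime import ORTTrainer, ORTTrainingArguments

model_name = "distilbert-base-uncased"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenize a small text-classification dataset (GLUE/sst2 is used here as an example)
dataset = load_dataset("glue", "sst2")
dataset = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

training_args = ORTTrainingArguments(
    output_dir="ort_trainer_output",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    optim="adamw_ort_fused",  # fused Adam optimizer implemented by ORT
)

trainer = ORTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    feature="sequence-classification",  # task name used to export the model to ONNX
)

trainer.train()
```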
The ORTSeq2SeqTrainer class is similar to the Seq2SeqTrainer class of Transformers. You can easily adapt your code by replacing Seq2SeqTrainer of Transformers with ORTSeq2SeqTrainer to take advantage of the acceleration empowered by ONNX Runtime. Here is an example of how to use ORTSeq2SeqTrainer compared with Seq2SeqTrainer:
-from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
+from optimum.onnxruntime import ORTSeq2SeqTrainer, ORTSeq2SeqTrainingArguments
# Step 1: Define training arguments
-training_args = Seq2SeqTrainingArguments(
+training_args = ORTSeq2SeqTrainingArguments(
output_dir="path/to/save/folder/",
optim-"adamw_ort_fused",
...
)
# Step 2: Create your ONNX Runtime Seq2SeqTrainer
-trainer = Seq2SeqTrainer(
+trainer = ORTSeq2SeqTrainer(
model=model,
args=training_args,
train_dataset=train_dataset,
+ feature="seq2seq-lm",
...
)
# Step 3: Use ONNX Runtime for training!🤗
trainer.train()
Check out more detailed example scripts in the optimum repository.
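As a reference, a condensed sketch for a translation-style task might look as follows. The t5-small checkpoint, the wmt16 ro-en data, and the preprocessing are illustrative assumptions; the key differences from the ORTTrainer example are the Seq2Seq classes, the data collator, and the feature name:

```python
# Condensed sketch (checkpoint, dataset, and preprocessing are assumptions for illustration)
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq
from optimum.onnxruntime import ORTSeq2SeqTrainer, ORTSeq2SeqTrainingArguments

model_name = "t5-small"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Illustrative preprocessing on a slice of an English-Romanian translation set
dataset = load_dataset("wmt16", "ro-en", split="train[:1%]")

def preprocess(batch):
    inputs = ["translate English to Romanian: " + ex["en"] for ex in batch["translation"]]
    targets = [ex["ro"] for ex in batch["translation"]]
    model_inputs = tokenizer(inputs, max_length=128, truncation=True)
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(targets, max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_dataset = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)

training_args = ORTSeq2SeqTrainingArguments(
    output_dir="ort_seq2seq_output",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    predict_with_generate=True,
    optim="adamw_ort_fused",  # fused Adam optimizer implemented by ORT
)

trainer = ORTSeq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
    feature="seq2seq-lm",  # task name used to export the model to ONNX
)

trainer.train()
```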
The ORTTrainingArguments class inherits from the TrainingArguments class in Transformers. Besides the optimizers implemented in Transformers, it allows you to use the optimizers implemented in ONNX Runtime.
Replace TrainingArguments with ORTTrainingArguments:
-from transformers import TrainingArguments
+from optimum.onnxruntime import ORTTrainingArguments
-training_args = TrainingArguments(
+training_args = ORTTrainingArguments(
output_dir=tmp_dir,
num_train_epochs=1,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
warmup_steps=500,
weight_decay=0.01,
logging_dir=tmp_dir,
optim="adamw_ort_fused", # Fused Adam optimizer implemented by ORT
)
DeepSpeed is supported by ONNX Runtime (only ZeRO stage 1 and 2 for the moment). You can find some DeepSpeed configuration examples in the Optimum repository.
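For example, you can point the training arguments at a DeepSpeed configuration file through the deepspeed option inherited from TrainingArguments. The sketch below is illustrative; the config file path is a placeholder for one of the configuration examples mentioned above:

```python
# Hedged sketch: reference a ZeRO stage 1 or 2 DeepSpeed config file (path is a placeholder)
from optimum.onnxruntime import ORTTrainingArguments

training_args = ORTTrainingArguments(
    output_dir="ort_trainer_output",
    fp16=True,
    optim="adamw_ort_fused",
    deepspeed="path/to/ds_config_zero_stage_1.json",  # assumed config file path
)
```

As with regular Transformers + DeepSpeed training, the script then needs to be started with a distributed launcher (for example torchrun or the deepspeed CLI).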
The ORTSeq2SeqTrainingArguments class inherits from the Seq2SeqTrainingArguments class in Transformers. Besides the optimizers implemented in Transformers, it allows you to use the optimizers implemented in ONNX Runtime.
Replace Seq2SeqTrainingArguments with ORTSeq2SeqTrainingArguments:
-from transformers import Seq2SeqTrainingArguments
+from optimum.onnxruntime import ORTSeq2SeqTrainingArguments
-training_args = Seq2SeqTrainingArguments(
+training_args = ORTSeq2SeqTrainingArguments(
output_dir=tmp_dir,
num_train_epochs=1,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
warmup_steps=500,
weight_decay=0.01,
logging_dir=tmp_dir,
optim="adamw_ort_fused", # Fused Adam optimizer implemented by ORT
)
DeepSpeed is supported by ONNX Runtime (only ZeRO stage 1 and 2 for the moment). You can find some DeepSpeed configuration examples in the Optimum repository.
If you have any problems or questions regarding ORTTrainer, please file an issue on the Optimum GitHub repository or discuss with us on Hugging Face's community forum. Cheers 🤗!