The TensorFlow Model Optimization Toolkit is a suite of tools for optimizing machine learning models for deployment. The TensorFlow Lite post-training quantization tool enables users to convert weights to 8-bit precision, which reduces the trained model size by about 4 times. The toolkit also includes APIs for pruning and for quantization during training, for cases where post-training quantization is insufficient. These tools help users reduce latency and inference cost, deploy models to edge devices with restricted resources, and optimize execution for existing hardware or new special-purpose accelerators.
The TensorFlow Model Optimization Toolkit is available as a pip package, tensorflow-model-optimization. To install the package, run the following command:
pip install -U tensorflow-model-optimization
For a hands-on guide on how to use the TensorFlow Model Optimization Toolkit, refer to this notebook
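As a minimal sketch of the post-training quantization path mentioned above, the snippet below applies dynamic-range quantization with the TFLite converter; the small Keras model and the output file name are placeholders, not part of the toolkit's documentation:

```python
import tensorflow as tf

# Placeholder Keras model; in practice this would be your trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

# Post-training dynamic-range quantization: the converter stores weights
# in 8-bit precision, shrinking the model by roughly 4x.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_quant_model)
```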
For model optimization, PyTorch supports INT8 quantization, which, compared to typical FP32 models, leads to a 4x reduction in model size and a 4x reduction in memory bandwidth requirements. PyTorch supports multiple approaches to quantizing a deep learning model, which are as follows:
For more details on quantization in PyTorch, see here
PyTorch quantization is available as an API in the torch package. To use it, simply install PyTorch and import the quantization API as follows:
pip install torch
import torch.quantization
For a hands-on guide on how to use PyTorch quantization, refer to this [notebook](https://colab.research.google.com/drive/1toyS6IUsFvjuSK71oeLZZ51mm8hVnlZv)
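As a rough sketch, post-training dynamic quantization converts the weights of selected layer types to int8 while quantizing activations on the fly; the two-layer model here is just a stand-in for a trained FP32 model:

```python
import torch
import torch.nn as nn
import torch.quantization

# Placeholder model; in practice this would be your trained FP32 model.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

# Post-training dynamic quantization: Linear weights become int8,
# activations are quantized dynamically at inference time.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized_model)
```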
ONNX Runtime is a cross-platform machine-learning model accelerator, with a flexible interface to integrate hardware-specific libraries. ONNX Runtime can be used with models from PyTorch, Tensorflow/Keras, TFLite, scikit-learn, and other frameworks. The benefits of using ONNX Runtime for Inferencing are as follows:
For more details on ONNX Runtime, see here
ONNX Runtime has two Python packages, and only one of them should be installed at a time in any one environment. Use the GPU package if you want to use ONNX Runtime with GPU support. The Python package for ONNX Runtime is available as a pip package. To install the CPU package, run the following command:
pip install onnxruntime
For the GPU version, run the following command:
pip install onnxruntime-gpu
For a hands-on guide on how to use ONNX Runtime, refer to this notebook
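For instance, a minimal inference sketch looks roughly like this; the model path, input name, and input shape are placeholders that depend on the model you export:

```python
import numpy as np
import onnxruntime as ort

# Create an inference session; "model.onnx" is a placeholder path.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Feed a dummy input matching the model's expected shape (placeholder here).
input_name = session.get_inputs()[0].name
dummy_input = np.random.randn(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```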
NVIDIA® TensorRT™ is an SDK for optimizing trained deep learning models to enable high-performance inference. TensorRT contains a deep learning inference optimizer for trained models and a runtime for execution. After users have trained their deep learning model in the framework of their choice, TensorRT enables them to run it with higher throughput and lower latency.
TensorRT is available as a pip package, tensorrt. To install the package, run the following command:
pip install tensorrt
For other installation methods, see here
For a hands-on guide on how to use TensorRT, refer to this notebook
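As a rough sketch of the typical workflow (assuming TensorRT 8.x or later and a model already exported to ONNX; the file names are placeholders), an engine can be built from an ONNX model like this:

```python
import tensorrt as trt

# Parse an ONNX model and build a serialized TensorRT engine.
# "model.onnx" and "model.engine" are placeholder paths.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # enable FP16 kernels where supported

serialized_engine = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(serialized_engine)
```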
The OpenVINO™ toolkit enables users to optimize a deep learning model from almost any framework and deploy it with best-in-class performance on a range of Intel® processors and other hardware platforms. The benefits of using OpenVINO include:
OpenVINO is available as a pip package, openvino. To install the package, run the following command:
pip install openvino
For other installation methods, see here
For a hands-on guide on how to use OpenVINO, refer to this notebook
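A minimal sketch with the OpenVINO runtime API is shown below (assuming OpenVINO 2023.1 or later; the model path and input shape are placeholders, and the model could be ONNX or OpenVINO IR):

```python
import numpy as np
import openvino as ov

# Read a model, compile it for CPU, and run a single inference.
# "model.onnx" and the input shape are placeholders.
core = ov.Core()
model = core.read_model("model.onnx")
compiled_model = core.compile_model(model, "CPU")

dummy_input = np.random.randn(1, 3, 224, 224).astype(np.float32)
output_layer = compiled_model.output(0)
result = compiled_model([dummy_input])[output_layer]
print(result.shape)
```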
Optimum serves as an extension of Transformers, offering a suite of tools for optimizing performance when training and running models on specific hardware, ensuring maximum efficiency. In the rapidly evolving AI landscape, specialized hardware and unique optimizations continue to emerge regularly. Optimum empowers developers to seamlessly leverage these diverse platforms while maintaining the ease of use inherent in Transformers. The platforms supported by Optimum as of now are:
Optimum is available as a pip package, optimum. To install the package, run the following command:
pip install optimum
For installation of accelerator-specific features, see here
For a hands-on guide on how to use Optimum for quantization, refer to this notebook
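As an example of quantization with Optimum's ONNX Runtime backend, the sketch below exports a Transformers checkpoint to ONNX and applies dynamic int8 quantization; it assumes the extras from pip install optimum[onnxruntime], and the checkpoint id and output directories are placeholders:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Export a Transformers checkpoint to ONNX (placeholder model id).
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
onnx_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
onnx_model.save_pretrained("onnx_model")

# Apply dynamic int8 quantization to the exported ONNX model.
quantizer = ORTQuantizer.from_pretrained("onnx_model")
qconfig = AutoQuantizationConfig.avx2(is_static=False, per_channel=False)
quantizer.quantize(save_dir="onnx_model_quantized", quantization_config=qconfig)
```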
Edge TPU is Google’s purpose-built ASIC designed to run AI at the edge. It delivers high performance in a small physical and power footprint, enabling the deployment of high-accuracy AI at the edge. The benefits of using the Edge TPU include:
For more details on the Edge TPU, see here
For a guide on how to set up and use the Edge TPU, refer to this notebook
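As a rough sketch of what inference looks like once a model has been compiled for the Edge TPU (this assumes the PyCoral library and an Edge TPU-compiled .tflite file, neither of which is covered above; the file name is a placeholder):

```python
import numpy as np
from pycoral.utils.edgetpu import make_interpreter

# Load an Edge TPU-compiled TFLite model (placeholder file name produced
# by the Edge TPU compiler).
interpreter = make_interpreter("model_edgetpu.tflite")
interpreter.allocate_tensors()

# Feed a dummy input matching the model's expected shape and dtype.
input_details = interpreter.get_input_details()[0]
dummy_input = np.zeros(input_details["shape"], dtype=input_details["dtype"])
interpreter.set_tensor(input_details["index"], dummy_input)

interpreter.invoke()

output_details = interpreter.get_output_details()[0]
print(interpreter.get_tensor(output_details["index"]))
```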