Export to TFLite

TensorFlow Lite is a lightweight framework for deploying machine learning models on resource-constrained devices, such as mobile phones, embedded systems, and Internet of Things (IoT) devices. TFLite is designed to optimize and run models efficiently on such devices, where computational power, memory, and energy are limited. A TensorFlow Lite model is stored in an efficient, portable format identified by the .tflite file extension.

🤗 Optimum offers functionality to export 🤗 Transformers models to TFLite through the exporters.tflite module. For the list of supported model architectures, please refer to the 🤗 Optimum documentation.

To export a model to TFLite, install the required dependencies:

pip install optimum[exporters-tf]

To check out all available arguments, refer to the 🤗 Optimum docs, or view the help in the command line:

optimum-cli export tflite --help

To export a model’s checkpoint from the 🤗 Hub, for example, google-bert/bert-base-uncased, run the following command:

optimum-cli export tflite --model google-bert/bert-base-uncased --sequence_length 128 bert_tflite/

You should see logs indicating progress and showing where the resulting model.tflite is saved, like this:

Validating TFLite model...
	-[✓] TFLite model output names match reference model (logits)
	- Validating TFLite Model output "logits":
		-[✓] (1, 128, 30522) matches (1, 128, 30522)
		-[x] values not close enough, max diff: 5.817413330078125e-05 (atol: 1e-05)
The TensorFlow Lite export succeeded with the warning: The maximum absolute difference between the output of the reference model and the TFLite exported model is not within the set tolerance 1e-05:
- logits: max diff = 5.817413330078125e-05.
 The exported model was saved at: bert_tflite
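
Once exported, the model can be run with TensorFlow's tf.lite.Interpreter. The following is a minimal sketch assuming the export above; the input tensor names (containing input_ids, attention_mask, and token_type_ids) are an assumption about how the exporter names them, so check interpreter.get_input_details() for the exact names in your file.

import tensorflow as tf
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
# Pad to the --sequence_length used at export time (128 in the example above).
encoding = tokenizer(
    "The capital of France is [MASK].",
    padding="max_length",
    max_length=128,
    return_tensors="np",
)

interpreter = tf.lite.Interpreter(model_path="bert_tflite/model.tflite")
interpreter.allocate_tensors()

# Feed each encoded array to the input tensor whose name matches it
# (assumption: the exported tensor names contain the tokenizer's key names).
for detail in interpreter.get_input_details():
    for key, value in encoding.items():
        if key in detail["name"]:
            interpreter.set_tensor(detail["index"], value.astype(detail["dtype"]))

interpreter.invoke()
logits = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
print(logits.shape)  # (1, 128, 30522), matching the validation log above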

The example above illustrates exporting a checkpoint from the 🤗 Hub. When exporting a local model, first make sure that you saved both the model’s weights and the tokenizer files in the same directory (local_path). When using the CLI, pass the local_path to the model argument instead of the checkpoint name on the 🤗 Hub, as sketched below.
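
For instance, here is a minimal sketch of preparing such a directory with save_pretrained() and exporting it; the local_path directory name is just an illustration:

from transformers import AutoTokenizer, TFAutoModelForMaskedLM

model = TFAutoModelForMaskedLM.from_pretrained("google-bert/bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

# Save the weights and the tokenizer files in the same directory.
model.save_pretrained("local_path")
tokenizer.save_pretrained("local_path")

Then pass that directory to the CLI in place of the Hub checkpoint name:

optimum-cli export tflite --model local_path --sequence_length 128 bert_tflite/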