AWS Trainium & Inferentia documentation

Inferentia Exporter


You can export a PyTorch model to Neuron with 🤗 Optimum to run inference on AWS Inferentia 1 and Inferentia 2.

Export functions

There is an export function for each generation of the Inferentia accelerator: export_neuron for INF1 and export_neuronx for INF2. In most cases you can simply call export, which selects the appropriate exporting function based on the environment.
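To make the dispatch concrete, here is an illustrative sketch (not the actual Optimum implementation) of how a top-level export entry point could pick the generation-specific function. The helper name `select_exporter` and the SDK detection logic are assumptions for illustration; the real library performs its own availability checks.

```python
import importlib.util


def select_exporter():
    """Hypothetical sketch: pick the exporter for the installed Neuron SDK.

    INF2/Trainium instances ship the torch-neuronx package, while INF1
    instances ship torch-neuron, so probing for the installed package is
    one plausible way to choose between the two export functions.
    """
    if importlib.util.find_spec("torch_neuronx") is not None:
        return "export_neuronx"  # Inferentia 2
    if importlib.util.find_spec("torch_neuron") is not None:
        return "export_neuron"  # Inferentia 1
    raise RuntimeError("No Neuron SDK found; install torch-neuron or torch-neuronx.")
```

On a machine without either SDK installed, `select_exporter()` raises rather than silently exporting for the wrong target.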

In addition, you can check that the exported model is valid with validate_model_outputs, which compares the compiled model's outputs on Neuron devices against the PyTorch model's outputs on CPU.
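The idea behind such a validation step can be sketched as a simple element-wise comparison within an absolute tolerance. The function below is a hypothetical stand-in, not the validate_model_outputs API itself; its name, arguments, and tolerance default are assumptions for illustration.

```python
def validate_outputs(reference, candidate, atol=1e-3):
    """Hypothetical sketch: compare compiled-model outputs against the
    CPU PyTorch reference, element by element, within tolerance atol."""
    if len(reference) != len(candidate):
        raise ValueError("Output count mismatch between reference and candidate")
    max_diff = max(abs(r - c) for r, c in zip(reference, candidate))
    if max_diff > atol:
        raise ValueError(f"Max abs difference {max_diff} exceeds atol={atol}")
    return max_diff
```

For example, `validate_outputs([0.1, 0.2], [0.1001, 0.1999])` passes, while a larger drift raises an error, signaling that compilation changed the model's numerics beyond the allowed tolerance.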