Interested in performing inference with an ONNX model? ⚡️
The Optimum docs on model inference with ONNX Runtime are now much clearer and simpler!
Want to deploy your favorite model from the Hub but don't know how to export it to the ONNX format? You can do it in a single line of code:
from optimum.onnxruntime import ORTModelForSequenceClassification
# Load the model from the hub and export it to the ONNX format
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
Check out the whole guide 👉 https://huggingface.co/docs/optimum/onnxruntime/usage_guides/models