
# INT8 MiniLM-L12-H384-uncased-mrpc

## Post-training dynamic quantization

### ONNX

This is an INT8 ONNX model quantized with Intel® Neural Compressor.

The original FP32 model is the fine-tuned [Intel/MiniLM-L12-H384-uncased-mrpc](https://huggingface.co/Intel/MiniLM-L12-H384-uncased-mrpc).
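The conversion to INT8 was done with Intel® Neural Compressor's post-training dynamic quantization. Below is a minimal sketch of how such a conversion can be run with the Neural Compressor 2.x API; the input ONNX file and output directory are placeholder names, and the exact configuration used for this model may differ.

```python
# Sketch: post-training dynamic quantization of an ONNX model with
# Intel® Neural Compressor (2.x API). File paths are placeholders.
from neural_compressor import PostTrainingQuantConfig, quantization

# Dynamic quantization computes activation scales at runtime,
# so no calibration dataset is required.
conf = PostTrainingQuantConfig(approach="dynamic")

q_model = quantization.fit(
    model="MiniLM-L12-H384-uncased-mrpc.onnx",  # exported FP32 ONNX model
    conf=conf,
)
q_model.save("./int8-dynamic")  # writes the quantized INT8 ONNX model
```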

#### Test result

|                    | INT8   | FP32   |
|--------------------|--------|--------|
| Accuracy (eval-f1) | 0.9107 | 0.9097 |
| Model size (MB)    | 33     | 128    |

#### Load ONNX model:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification

model = ORTModelForSequenceClassification.from_pretrained('Intel/MiniLM-L12-H384-uncased-mrpc-int8-dynamic')
```
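Once loaded, the model can be used like a regular Transformers sequence-classification model for MRPC-style paraphrase detection. A short usage sketch follows; the example sentences are illustrative, and it assumes the tokenizer files are available in this repository (otherwise load the tokenizer from the original Intel/MiniLM-L12-H384-uncased-mrpc).

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Intel/MiniLM-L12-H384-uncased-mrpc-int8-dynamic')

# MRPC is a sentence-pair (paraphrase) task, so pass two sentences.
inputs = tokenizer(
    "The company posted higher quarterly profits.",
    "Quarterly profits at the company rose.",
    return_tensors="pt",
)
outputs = model(**inputs)

# In GLUE MRPC, label 1 means the sentences are equivalent (paraphrases).
predicted_class_id = int(outputs.logits.argmax(dim=-1))
print(predicted_class_id)
```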