
INT8 bart-large-mrpc

Post-training dynamic quantization

This is an INT8 PyTorch model, quantized with huggingface/optimum-intel using Intel® Neural Compressor.

The original FP32 model is the fine-tuned bart-large-mrpc.
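Post-training dynamic quantization converts weights to INT8 ahead of time, while activations are quantized on the fly at inference. As an illustration of the idea only (not the exact optimum-intel / Neural Compressor recipe used for this model), PyTorch's built-in API applies the same technique to a toy FP32 model:

```python
import torch
import torch.nn as nn

# Toy FP32 model standing in for bart-large; illustration only.
model_fp32 = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))

# Post-training dynamic quantization: Linear weights become INT8,
# activations are quantized dynamically at inference time.
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 64)
print(model_int8(x).shape)  # torch.Size([1, 2])
```

Because only the weights are stored as INT8, no calibration dataset is needed, which is what makes this a "dynamic" (rather than static) post-training scheme.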

Test result

|                    | INT8   | FP32    |
|--------------------|--------|---------|
| Accuracy (eval-f1) | 0.9051 | 0.9120  |
| Model size (MB)    | 547    | 1556.48 |

Load with `optimum`:

```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification

int8_model = IncQuantizedModelForSequenceClassification.from_pretrained(
    'Intel/bart-large-mrpc-int8-dynamic',
)
```
