
INT8 BERT base uncased finetuned MRPC

Post-training static quantization

This is an INT8 PyTorch model quantized with Intel® Neural Compressor.

The original fp32 model comes from the fine-tuned model Intel/bert-base-uncased-mrpc.

Calibration uses the training dataloader with a sampling size of 1000.

The linear module bert.encoder.layer.9.output.dense falls back to fp32 to keep the relative accuracy loss within 1%.
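Post-training static quantization maps fp32 tensors to int8 using a scale and zero-point derived from the value range observed during calibration. The sketch below is purely illustrative of that affine-quantization math, not Intel® Neural Compressor's actual implementation:

```python
def quantize_params(values, num_bits=8):
    """Compute scale and zero-point for asymmetric affine quantization."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    # The representable range must include zero exactly.
    lo, hi = min(lo, 0.0), max(hi, 0.0)
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point, num_bits=8):
    """Map fp32 values to clamped unsigned int8 codes."""
    qmin, qmax = 0, 2 ** num_bits - 1
    return [min(max(round(v / scale) + zero_point, qmin), qmax) for v in values]

def dequantize(codes, scale, zero_point):
    """Recover approximate fp32 values from int8 codes."""
    return [(q - zero_point) * scale for q in codes]

# Pretend calibration activations (hypothetical values for illustration).
calib = [-1.0, -0.5, 0.0, 0.5, 2.0]
scale, zp = quantize_params(calib)
approx = dequantize(quantize(calib, scale, zp), scale, zp)
```

Each dequantized value differs from the original by at most one quantization step (`scale`), which is why a well-chosen calibration set keeps the accuracy loss small.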

Test result

|                    | INT8   | FP32   |
|--------------------|--------|--------|
| Accuracy (eval-f1) | 0.8959 | 0.9042 |
| Model size (MB)    | 119    | 418    |

Load with Intel® Neural Compressor:

```python
from neural_compressor.utils.load_huggingface import OptimizedModel

int8_model = OptimizedModel.from_pretrained(
    "<model-id>"  # this model's repo id on the Hugging Face Hub
)
```