---
license: cc-by-4.0
tags:
  - int8
  - Intel® Neural Compressor
  - PostTrainingStatic
datasets:
  - squad2
metrics:
  - f1
---

# INT8 RoBERTa base finetuned on SQuAD2

## Post-training static quantization

This is an INT8 PyTorch model quantized with Intel® Neural Compressor.

The original FP32 model comes from the fine-tuned model [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2).

The calibration dataloader is the train dataloader. The default calibration sampling size of 100 is not exactly divisible by the batch size of 8, so the real sampling size rounds up to 104.

The linear modules roberta.encoder.layer.7.output.dense, roberta.encoder.layer.8.output.dense, and roberta.encoder.layer.9.output.dense fall back to FP32 to keep the relative accuracy loss below 1% (see the sketch below).
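
This card does not include the original quantization recipe, but the following is a minimal sketch of how such a recipe can be expressed with the Intel® Neural Compressor 2.x Python API. The `model` and `calib_dataloader` variables are placeholders for the FP32 model and the SQuAD2 train dataloader defined elsewhere; the op names come from this card.

```python
from neural_compressor import PostTrainingQuantConfig, quantization

# Keep the three linear modules named above in FP32.
fp32_fallback = {
    "activation": {"dtype": ["fp32"]},
    "weight": {"dtype": ["fp32"]},
}
op_name_dict = {
    "roberta.encoder.layer.7.output.dense": fp32_fallback,
    "roberta.encoder.layer.8.output.dense": fp32_fallback,
    "roberta.encoder.layer.9.output.dense": fp32_fallback,
}

conf = PostTrainingQuantConfig(
    approach="static",                # post-training static quantization
    calibration_sampling_size=[100],  # rounds up to 104 with batch size 8
    op_name_dict=op_name_dict,
)

# `model` is the FP32 deepset/roberta-base-squad2 model;
# `calib_dataloader` is the train dataloader used for calibration.
q_model = quantization.fit(model, conf, calib_dataloader=calib_dataloader)
q_model.save("./int8_model")
```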

## Test result

|                        | INT8    | FP32    |
|------------------------|---------|---------|
| **Accuracy (eval-f1)** | 82.3122 | 82.9231 |
| **Model size (MB)**    | 141     | 474     |

Load with Intel® Neural Compressor:

```python
from neural_compressor.utils.load_huggingface import OptimizedModel

int8_model = OptimizedModel.from_pretrained(
    'Intel/roberta-base-squad2-int8-static',
)
```
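
Once loaded, the model should behave like a standard `transformers` question-answering model, so inference follows the usual extractive-QA pattern. A minimal sketch, assuming the tokenizer of the original FP32 model and an illustrative question/context pair:

```python
import torch
from transformers import AutoTokenizer
from neural_compressor.utils.load_huggingface import OptimizedModel

tokenizer = AutoTokenizer.from_pretrained('deepset/roberta-base-squad2')
int8_model = OptimizedModel.from_pretrained('Intel/roberta-base-squad2-int8-static')
int8_model.eval()

# Illustrative inputs only.
question = "What was the model quantized with?"
context = "This INT8 model was quantized with Intel Neural Compressor."
inputs = tokenizer(question, context, return_tensors='pt')

with torch.no_grad():
    outputs = int8_model(**inputs)

# Pick the most likely start/end token positions and decode the answer span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs['input_ids'][0][start:end + 1]))
```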