---
language:
- en
license: apache-2.0
tags:
- text-classification
- int8
- PostTrainingDynamic
datasets:
- glue
metrics:
- f1
model-index:
- name: bart-large-mrpc-int8-dynamic
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE MRPC
      type: glue
      args: mrpc
    metrics:
    - name: F1
      type: f1
      value: 0.9050847457627118
---

# INT8 bart-large-mrpc

### Post-training dynamic quantization

This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).

The original FP32 model comes from the fine-tuned model [bart-large-mrpc](https://huggingface.co/Intel/bart-large-mrpc).

### Test result

- Batch size = 8
- [Amazon Web Services](https://aws.amazon.com/) c6i.xlarge (Intel Ice Lake: 4 vCPUs, 8 GB memory) instance

|   | INT8 | FP32 |
|---|:---:|:---:|
| **Throughput (samples/sec)** | 6.529 | 3.261 |
| **Accuracy (eval-f1)** | 0.9051 | 0.9120 |
| **Model size (MB)** | 547 | 1556.48 |

### Load with Intel® Neural Compressor (built from source):

```python
from neural_compressor.utils.load_huggingface import OptimizedModel

int8_model = OptimizedModel.from_pretrained(
    'Intel/bart-large-mrpc-int8-dynamic',
)
```

Notes:
- The INT8 model only shows its throughput advantage over the FP32 model when the CPU is fully loaded; under light load, benchmarks can give the misleading impression that INT8 is slower than FP32.
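
For reference, here is a minimal inference sketch (not part of the original card) showing one way the loaded INT8 model might be used for an MRPC-style paraphrase check. It assumes the tokenizer is available from the original FP32 repo `Intel/bart-large-mrpc`, that the quantized model returns a standard `logits` output, and that label 1 means "equivalent" as in GLUE MRPC:

```python
import torch
from transformers import AutoTokenizer
from neural_compressor.utils.load_huggingface import OptimizedModel

# Tokenizer assumed to come from the original FP32 repo referenced above.
tokenizer = AutoTokenizer.from_pretrained('Intel/bart-large-mrpc')
int8_model = OptimizedModel.from_pretrained('Intel/bart-large-mrpc-int8-dynamic')
int8_model.eval()

# Encode an MRPC-style sentence pair.
inputs = tokenizer(
    "The company posted record profits this quarter.",
    "Quarterly profits at the company hit an all-time high.",
    return_tensors="pt",
)

with torch.no_grad():
    # Assumes the returned model exposes HF-style outputs with `.logits`.
    logits = int8_model(**inputs).logits

# GLUE MRPC convention: label 1 = paraphrase ("equivalent"), 0 = not.
print("paraphrase" if logits.argmax(-1).item() == 1 else "not paraphrase")
```

When comparing this model's throughput against FP32, keep the note above in mind and benchmark with the CPU fully loaded.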