---
language: en
license: apache-2.0
datasets:
- glue
- mrpc
metrics:
- f1
tags:
- text-classification
- nlp
- neural-compressor
- PostTrainingDynamic
- int8
- Intel® Neural Compressor
- albert
---
# Dynamically quantized Albert base finetuned MRPC
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Test result](#test-result)
## Model Details
- **Model Description:** This model is an Albert base model fine-tuned on MRPC and dynamically quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) through the use of Intel® Neural Compressor.
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** For more details on the original model, we encourage users to check out this model card.
## How to Get Started With the Model
### PyTorch
To load the quantized model, you can do so as follows:
```python
from optimum.intel import INCModelForSequenceClassification

model = INCModelForSequenceClassification.from_pretrained("Intel/albert-base-v2-MRPC-int8")
```
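The quantized model can then be used like any sequence-classification model. Below is a minimal inference sketch; the sentence pair and the assumption that the tokenizer ships under the same model id are illustrative, not part of this card:

```python
import torch
from transformers import AutoTokenizer

# Assumption: the tokenizer is available under the same model id.
tokenizer = AutoTokenizer.from_pretrained("Intel/albert-base-v2-MRPC-int8")

# MRPC is a sentence-pair (paraphrase) task, so encode two sentences together.
inputs = tokenizer(
    "The company posted strong quarterly earnings.",
    "Quarterly earnings for the company were strong.",
    return_tensors="pt",
)

with torch.no_grad():
    logits = model(**inputs).logits

# GLUE MRPC convention: label 1 = paraphrase, label 0 = not a paraphrase.
print(logits.argmax(dim=-1).item())
```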
## Test result
|  | INT8 | FP32 |
|---|---|---|
| **Accuracy (eval-f1)** | 0.9193 | 0.9263 |
| **Model size (MB)** | 45.0 | 46.7 |
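The eval-f1 figure can be reproduced along the following lines. This is a sketch under the assumption that evaluation runs on the GLUE MRPC validation split with the 🤗 `datasets` and `evaluate` libraries; batching and padding are omitted for brevity:

```python
import torch
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer
from optimum.intel import INCModelForSequenceClassification

model_id = "Intel/albert-base-v2-MRPC-int8"
model = INCModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

dataset = load_dataset("glue", "mrpc", split="validation")
metric = evaluate.load("glue", "mrpc")  # reports both accuracy and f1 for MRPC

predictions = []
for example in dataset:
    inputs = tokenizer(example["sentence1"], example["sentence2"], return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    predictions.append(int(logits.argmax(dim=-1)))

print(metric.compute(predictions=predictions, references=dataset["label"]))
```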