
Dynamically quantized ALBERT base fine-tuned on MRPC

Model Details

Model Description: This model is an ALBERT base model fine-tuned on MRPC and dynamically quantized with huggingface/optimum-intel through the usage of Intel® Neural Compressor.

  • Model Type: Text Classification
  • Language(s): English
  • License: Apache-2.0
  • Parent Model: For more details on the original model, we encourage users to check out this model card.

How to Get Started With the Model


The quantized model can be loaded as follows:

from optimum.intel import INCModelForSequenceClassification

model = INCModelForSequenceClassification.from_pretrained("Intel/albert-base-v2-MRPC-int8")

Test result

                     INT8    FP32
Accuracy (eval-f1)   0.9193  0.9263
Model size (MB)      45.0    46.7
