# Dynamically quantized DistilBERT base uncased finetuned SST-2

## Model Details

Model Description: This model is a DistilBERT model fine-tuned on SST-2 and dynamically quantized with Optimum Intel through the use of Intel® Neural Compressor.

- **Model Type:** Text Classification
- **Language(s):** English
- **Parent Model:** For more details on the original model, we encourage users to check out its model card.

## How to Get Started With the Model

To load the quantized model, you can do as follows:

```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification

model = IncQuantizedModelForSequenceClassification.from_pretrained(
    "Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic"
)
```