
Dynamically quantized DistilBERT base uncased finetuned SST-2

Table of Contents

  • Model Details
  • How to Get Started With the Model

Model Details

Model Description: This model is a DistilBERT model fine-tuned on SST-2 and dynamically quantized to INT8 with optimum-intel, using Intel® Neural Compressor.

  • Model Type: Text Classification
  • Language(s): English
  • License: Apache-2.0
  • Parent Model: For more details on the original model, see the distilbert-base-uncased-finetuned-sst-2-english model card.
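
As a rough illustration of the dynamic quantization step described above, the sketch below shows how a comparable INT8 model could be produced with optimum-intel and Intel® Neural Compressor. This is not the exact recipe used to build this model, and the class and argument names (INCQuantizer, PostTrainingQuantConfig, the save directory) may differ across optimum-intel versions:

from transformers import AutoModelForSequenceClassification
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer

# Start from the FP32 parent model fine-tuned on SST-2
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")

# Post-training dynamic quantization: weights are quantized ahead of time,
# activations are quantized on the fly at inference, so no calibration set is needed
quantization_config = PostTrainingQuantConfig(approach="dynamic")
quantizer = INCQuantizer.from_pretrained(model)
quantizer.quantize(quantization_config=quantization_config, save_directory="distilbert-sst2-int8-dynamic")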

How to Get Started With the Model

The quantized model can be loaded as follows:

from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification

# Load the INT8 dynamically quantized model from the Hugging Face Hub
model = IncQuantizedModelForSequenceClassification.from_pretrained("Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic")
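
Once loaded, the quantized model can be used like a standard transformers sequence classification model. The example below is a minimal sketch, assuming the tokenizer files are available in the same repository (otherwise the parent model's tokenizer can be used) and that the labels follow the parent model's NEGATIVE/POSITIVE mapping:

import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic")

# Tokenize a sample sentence and run it through the quantized model
inputs = tokenizer("This movie was absolutely wonderful.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its label
predicted_class_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_class_id])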
