Optimization

🤗 Optimum Intel provides an optimum.intel.openvino package that enables you to apply quantization to many models hosted on the 🤗 hub using the NNCF framework.

Post-training optimization

Post-training static quantization introduces an additional calibration step where data is fed through the network in order to compute the quantization parameters of the activations. Here is how to apply static quantization on a fine-tuned DistilBERT:

from functools import partial
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.intel.openvino import OVConfig, OVQuantizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The directory where the quantized model will be saved
save_dir = "ptq_model"

def preprocess_function(examples, tokenizer):
    return tokenizer(examples["sentence"], padding="max_length", max_length=128, truncation=True)

# Load the default quantization configuration detailing the quantization we wish to apply
quantization_config = OVConfig()
# Instantiate our OVQuantizer using the desired configuration
quantizer = OVQuantizer.from_pretrained(model)
# Create the calibration dataset used to perform static quantization
calibration_dataset = quantizer.get_calibration_dataset(
    "glue",
    dataset_config_name="sst2",
    preprocess_function=partial(preprocess_function, tokenizer=tokenizer),
    num_samples=300,
    dataset_split="train",
)
# Apply static quantization and export the resulting quantized model to OpenVINO IR format
quantizer.quantize(
    quantization_config=quantization_config,
    calibration_dataset=calibration_dataset,
    save_directory=save_dir,
)
# Save the tokenizer
tokenizer.save_pretrained(save_dir)

The quantize() method applies post-training static quantization and exports the resulting quantized model to the OpenVINO Intermediate Representation (IR). The resulting graph is represented by two files: an XML file describing the network topology and a binary file containing the weights. The resulting model can be run on any target Intel device.
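
As a quick sanity check, you can list the contents of the save directory to verify that the IR pair was written. This is a minimal sketch; the exact file names depend on your optimum-intel version (openvino_model.xml and openvino_model.bin are typical):

from pathlib import Path

# Inspect the export directory; the OpenVINO IR is saved as an XML/BIN pair
# alongside the model and tokenizer configuration files.
for file in sorted(Path(save_dir).iterdir()):
    print(file.name)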

Training-time optimization

Quantization aware training (QAT) simulates the effects of quantization during training in order to alleviate its impact on the model's accuracy. It can be used when post-training quantization results in an important accuracy degradation. Here is how to fine-tune DistilBERT on the SST-2 task while applying quantization aware training (QAT).

import evaluate
import numpy as np
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TrainingArguments, default_data_collator

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The directory where the quantized model will be saved
save_dir = "qat_model"
dataset = load_dataset("glue", "sst2")
dataset = dataset.map(
    lambda examples: tokenizer(examples["sentence"], padding=True, truncation=True, max_length=128), batched=True
)
metric = evaluate.load("glue", "sst2")
compute_metrics = lambda p: metric.compute(predictions=np.argmax(p.predictions, axis=1), references=p.label_ids)
-from transformers import Trainer
+from optimum.intel.openvino import OVConfig, OVTrainer

# Load the default quantization configuration detailing the quantization we wish to apply
+ov_config = OVConfig()

-trainer = Trainer(
+trainer = OVTrainer(
    model=model,
    args=TrainingArguments(save_dir, num_train_epochs=1.0, do_train=True, do_eval=True),
    train_dataset=dataset["train"].select(range(300)),
    eval_dataset=dataset["validation"],
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
    data_collator=default_data_collator,
+   ov_config=ov_config,
+   feature="sequence-classification",
)

# Train the model while applying quantization
train_result = trainer.train()
metrics = trainer.evaluate()
# Export the quantized model to OpenVINO IR format and save it
trainer.save_model()

Inference with Transformers pipeline

After applying quantization on our model, we can easily load it with the corresponding OVModelForXxx class and perform inference with OpenVINO Runtime using the Transformers pipelines.

from transformers import AutoTokenizer, pipeline
from optimum.intel.openvino import OVModelForSequenceClassification

# Load the quantized model and its tokenizer from the directory they were saved to
ov_model = OVModelForSequenceClassification.from_pretrained(save_dir)
tokenizer = AutoTokenizer.from_pretrained(save_dir)
cls_pipe = pipeline("text-classification", model=ov_model, tokenizer=tokenizer)
text = "He's a dreadful magician."
outputs = cls_pipe(text)

[{'label': 'NEGATIVE', 'score': 0.9840195178985596}]