---
tags:
- optimum
datasets:
- banking77
metrics:
- accuracy
model-index:
- name: quantized-distilbert-banking77
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: banking77
      type: banking77
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9224
---
# Quantized-distilbert-banking77
This model is a statically quantized version of [optimum/distilbert-base-uncased-finetuned-banking77](https://huggingface.co/optimum/distilbert-base-uncased-finetuned-banking77), which was fine-tuned on the `banking77` dataset.
The model was created using the [optimum-static-quantization](https://github.com/philschmid/optimum-static-quantization) notebook.
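For reference, the static-quantization flow in that notebook looks roughly like the sketch below. It is based on the optimum v1.2-era API the notebook targets (e.g. `ORTQuantizer.from_pretrained(model_id, feature=...)`); later optimum releases changed this interface, so follow the linked notebook for the exact, tested code:

```python
from functools import partial
from pathlib import Path

from optimum.onnxruntime import ORTQuantizer
from optimum.onnxruntime.configuration import AutoCalibrationConfig, AutoQuantizationConfig

model_id = "optimum/distilbert-base-uncased-finetuned-banking77"
onnx_path = Path("onnx")

# Create a quantizer for the fine-tuned model and pick a static int8
# configuration targeting AVX512-VNNI (matching the c6i instance family)
quantizer = ORTQuantizer.from_pretrained(model_id, feature="sequence-classification")
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=True, per_channel=True)

# Static quantization needs a calibration pass to estimate activation ranges;
# a small sample of the banking77 training split is used here
def preprocess_fn(examples, tokenizer):
    return tokenizer(examples["text"])

calibration_dataset = quantizer.get_calibration_dataset(
    "banking77",
    preprocess_function=partial(preprocess_fn, tokenizer=quantizer.tokenizer),
    num_samples=100,
    dataset_split="train",
)
calibration_config = AutoCalibrationConfig.minmax(calibration_dataset)
ranges = quantizer.fit(
    dataset=calibration_dataset,
    calibration_config=calibration_config,
    onnx_model_path=onnx_path / "model.onnx",
    operators_to_quantize=qconfig.operators_to_quantize,
)

# Write out the quantized ONNX model using the calibrated tensor ranges
quantizer.export(
    onnx_model_path=onnx_path / "model.onnx",
    onnx_quantized_model_output_path=onnx_path / "model-quantized.onnx",
    calibration_tensors_range=ranges,
    quantization_config=qconfig,
)
```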
It achieves the following results on the evaluation set:

**Accuracy**

- Vanilla model: 92.5%
- Quantized model: 92.24%

> The quantized model retains 99.72% of the fp32 model's accuracy.
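A minimal sketch for reproducing the accuracy figure, assuming the `test` split of `banking77` and that the model's label names match the dataset's (both are assumptions, not stated by this card):

```python
from datasets import load_dataset
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "philschmid/quantized-distilbert-banking77"
clf = pipeline(
    "text-classification",
    model=ORTModelForSequenceClassification.from_pretrained(model_id),
    tokenizer=AutoTokenizer.from_pretrained(model_id),
)

# Run the quantized model over the banking77 test split and score it
test = load_dataset("banking77", split="test")
preds = clf(test["text"], batch_size=32)
labels = test.features["label"].int2str(test["label"])
accuracy = sum(p["label"] == l for p, l in zip(preds, labels)) / len(labels)
print(f"accuracy: {accuracy:.4f}")  # ~0.9224 expected
```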
**Latency**

- Payload sequence length: 128
- Instance type: AWS c6i.xlarge

| latency | vanilla transformers | quantized optimum model | improvement |
|---------|----------------------|-------------------------|-------------|
| p95     | 75.69ms              | 26.75ms                 | 2.83x       |
| avg     | 57.52ms              | 24.86ms                 | 2.31x       |
## How to use

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

# Load the quantized ONNX model and its tokenizer from the Hub
model = ORTModelForSequenceClassification.from_pretrained("philschmid/quantized-distilbert-banking77")
tokenizer = AutoTokenizer.from_pretrained("philschmid/quantized-distilbert-banking77")

# Wrap them in a standard transformers pipeline
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

print(classifier("What is the exchange rate like on this app?"))
```
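The latency figures above can be sanity-checked with a simple timing loop like the one below (a sketch: the repeated payload approximating a 128-token sequence and the sample counts are assumptions, not the exact benchmark setup):

```python
import time

import numpy as np

# Reuses the `classifier` pipeline from the snippet above.
# Roughly a 128-token payload for the DistilBERT tokenizer (an approximation).
payload = "What is the exchange rate like on this app? " * 12

# Warm up, then measure per-request latency in milliseconds
for _ in range(10):
    classifier(payload)

latencies = []
for _ in range(300):
    start = time.perf_counter()
    classifier(payload)
    latencies.append((time.perf_counter() - start) * 1000)

print(f"avg: {np.mean(latencies):.2f}ms, p95: {np.percentile(latencies, 95):.2f}ms")
```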