Update model card
README.md CHANGED
@@ -8,6 +8,7 @@ metrics:
 - accuracy
 tags:
 - text-classfication
+- neural-compressor
 - int8
 ---
 
@@ -26,10 +27,16 @@ tags:
 
 ## How to Get Started With the Model
 
-To load the quantized model, you can do as follows:
+To load the quantized model and run inference using the Transformers [pipelines](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines), you can do as follows:
 
 ```python
-from
+from transformers import AutoTokenizer, pipeline
+from optimum.intel.neural_compressor import IncQuantizedModelForSequenceClassification
 
-
+model_id = "echarlaix/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic"
+model = IncQuantizedModelForSequenceClassification.from_pretrained(model_id)
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+cls_pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
+text = "He's a dreadful magician."
+outputs = cls_pipe(text)
 ```