Upload README.md with huggingface_hub
README.md CHANGED

@@ -7,7 +7,9 @@ license: apache-2.0
 tags:
 - exbert
 - openvino
-- openvino
+- openvino-export
+- nncf
+- 4-bit
 co2_eq_emissions: 149200
 model-index:
 - name: distilgpt2
@@ -24,9 +26,9 @@ model-index:
 name: Perplexity
 ---
 
-This model is a quantized version of [`echarlaix/distilgpt2-openvino`](https://huggingface.co/echarlaix/distilgpt2-openvino) and
+This model is a quantized version of [`echarlaix/distilgpt2-openvino`](https://huggingface.co/echarlaix/distilgpt2-openvino) and is converted to the OpenVINO format. This model was obtained via the [nncf-quantization](https://huggingface.co/spaces/echarlaix/nncf-quantization) space with [optimum-intel](https://github.com/huggingface/optimum-intel).
 
-First make sure you have optimum-intel installed:
+First make sure you have `optimum-intel` installed:
 
 ```bash
 pip install optimum[openvino]
@@ -37,6 +39,6 @@ To load your model you can do as follows:
 ```python
 from optimum.intel import OVModelForCausalLM
 
-model_id = "echarlaix/distilgpt2-openvino-
+model_id = "echarlaix/distilgpt2-openvino-4bit"
 model = OVModelForCausalLM.from_pretrained(model_id)
 ```
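For context, a minimal generation sketch that builds on the updated load snippet. This is illustrative rather than part of the commit: it assumes the `echarlaix/distilgpt2-openvino-4bit` repo also hosts the tokenizer files, and uses the standard `transformers` `generate()` API that `OVModelForCausalLM` supports.

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "echarlaix/distilgpt2-openvino-4bit"

# Load the 4-bit quantized OpenVINO model and its tokenizer
# (assumes the tokenizer files are present in the same repo).
model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Generate a short continuation for a prompt.
inputs = tokenizer("Once upon a time,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```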