---
license: mit
tags:
- openvino
- int4
---
This is an INT4 quantized version of the `microsoft/Phi-3-mini-128k-instruct` model. It was created with the following Python packages:
```
openvino==2024.5.0rc1
optimum==1.23.3
optimum-intel==1.20.1
nncf==2.13.0
torch==2.5.1
transformers==4.46.2
```
The model was quantized with the following command:
```
optimum-cli export openvino --model "microsoft/Phi-3-mini-128k-instruct" --weight-format int4 --group-size 128 --sym --ratio 1 --all-layers ./Phi-3-mini-128k-instruct-ov-int4
```
For more details, run the following command from your Python environment: `optimum-cli export openvino --help`
During quantization, NNCF reported the following statistics of the bitwidth distribution:

| Num bits (N) | % all parameters (layers) | % ratio-defining parameters (layers) |
|----------------|-----------------------------|----------------------------------------|
| 4 | 100% (130 / 130) | 100% (130 / 130) |
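To run inference with the exported model, a typical approach uses the OpenVINO integration in `optimum-intel` (a sketch; the local directory path is an assumption matching the export command above):

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

# Path produced by the export command above (assumed local layout).
model_dir = "./Phi-3-mini-128k-instruct-ov-int4"

model = OVModelForCausalLM.from_pretrained(model_dir)
tokenizer = AutoTokenizer.from_pretrained(model_dir)

inputs = tokenizer("What is INT4 quantization?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```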