---
license: llama3.1
tags:
- openvino
- int4
---
This is an INT4 quantized version of the `meta-llama/Llama-3.1-8B-Instruct` model. It was created with the following Python package versions:
```
openvino==2025.0.0
optimum==1.24.0
optimum-intel==1.22.0
nncf==2.15.0
torch==2.6.0
transformers==4.48.3
```
The quantized model was created with the following command:
```
optimum-cli export openvino --model "meta-llama/Llama-3.1-8B-Instruct" --weight-format int4 --group-size 128 --sym --ratio 1 --all-layers ./llama-3_1-8b-instruct-ov-int4
```
For more details, run the following command from your Python environment: `optimum-cli export openvino --help`
During quantization, NNCF reported the following bitwidth distribution:

| Num bits (N) | % all parameters (layers) | % ratio-defining parameters (layers) |
|--------------|---------------------------|--------------------------------------|
| 4            | 100% (226 / 226)          | 100% (226 / 226)                     |
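The exported model can be loaded for inference with `optimum-intel`. The sketch below is illustrative: the local directory path matches the export command above, and the prompt and generation settings are arbitrary examples.

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

# Path produced by the optimum-cli export command above (adjust as needed).
model_dir = "./llama-3_1-8b-instruct-ov-int4"

# Loads the INT4 OpenVINO IR directly; no further conversion is performed.
model = OVModelForCausalLM.from_pretrained(model_dir)
tokenizer = AutoTokenizer.from_pretrained(model_dir)

# Example prompt; any text works here.
inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

By default inference runs on CPU; a different OpenVINO device can be selected by passing `device="GPU"` to `from_pretrained`.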