This is an INT4 quantized version of the meta-llama/Llama-2-13b-chat-hf model. The following Python packages were used to create this model:

```
openvino==2024.5.0rc1
optimum==1.23.3
optimum-intel==1.20.1
nncf==2.13.0
torch==2.5.1
transformers==4.46.2
```

The quantized model was created with the following command:

```sh
optimum-cli export openvino --model "meta-llama/Llama-2-13b-chat-hf" --weight-format int4 --group-size 128 --sym --ratio 1 --all-layers ./Llama-2-13b-chat-hf-ov-int4
```
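For reference, the same export can be expressed through the optimum-intel Python API. The sketch below is an assumed equivalent of the CLI command above, not the exact script used to produce this model:

```python
# A sketch of the same INT4 weight quantization via the optimum-intel Python
# API; assumed equivalent to the CLI command above, not the script actually
# used to produce this model.
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

quantization_config = OVWeightQuantizationConfig(
    bits=4,           # --weight-format int4
    sym=True,         # --sym: symmetric quantization
    group_size=128,   # --group-size 128
    ratio=1.0,        # --ratio 1: apply INT4 to all ratio-defining layers
    all_layers=True,  # --all-layers
)

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly.
model = OVModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf",
    export=True,
    quantization_config=quantization_config,
)
model.save_pretrained("./Llama-2-13b-chat-hf-ov-int4")
```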

For more details on the available export options, run `optimum-cli export openvino --help` from your Python environment.
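A minimal inference sketch for loading and running the exported model, assuming the `./Llama-2-13b-chat-hf-ov-int4` directory produced by the export command above:

```python
# A minimal inference sketch for the exported INT4 model; assumes the
# ./Llama-2-13b-chat-hf-ov-int4 directory produced by the export command.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_dir = "./Llama-2-13b-chat-hf-ov-int4"

# Load the OpenVINO IR model and the tokenizer saved alongside it.
model = OVModelForCausalLM.from_pretrained(model_dir)
tokenizer = AutoTokenizer.from_pretrained(model_dir)

# Run a short generation to verify the model works end to end.
inputs = tokenizer("What does INT4 quantization do?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```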

During quantization, NNCF logged the following bitwidth distribution (`INFO:nncf:Statistics of the bitwidth distribution:`):

| Num bits (N) | % all parameters (layers) | % ratio-defining parameters (layers) |
|--------------|---------------------------|--------------------------------------|
| 4            | 100% (282 / 282)          | 100% (282 / 282)                     |