
This is an INT4 quantized version of the meta-llama/Llama-2-13b-chat-hf model. It was created with the following Python package versions:

```
openvino==2024.3.0.dev20240528
openvino-nightly==2024.3.0.dev20240528
openvino-tokenizers==2024.3.0.0.dev20240528
optimum==1.19.2
optimum-intel==1.17.0.dev0+aefabf0
nncf==2.11.0.dev0+90a7f0d5
torch==2.3.0+cu121
transformers==4.40.2
```

This quantized model was created using the following command:

```
optimum-cli export openvino -m "meta-llama/Llama-2-13b-chat-hf" --task text-generation-with-past --weight-format int4 --group-size 128 --trust-remote-code ./Llama-2-13b-chat-hf-ov-int4
```

For more details on the available export options, run the following command from your Python environment: `optimum-cli export openvino --help`
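The exported folder can be loaded for inference through optimum-intel's OpenVINO integration. The sketch below is illustrative, not part of the original card: the `model_dir` path and prompt are placeholders, and it assumes `optimum-intel` with OpenVINO support is installed.

```python
def generate_with_ov(model_dir: str, prompt: str, max_new_tokens: int = 64) -> str:
    """Minimal sketch: run text generation with an OpenVINO-exported model.

    Imports are kept inside the function so the sketch only requires
    optimum-intel and transformers when it is actually called.
    """
    from optimum.intel import OVModelForCausalLM
    from transformers import AutoTokenizer

    # Load the tokenizer and the quantized OpenVINO model from the export folder.
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = OVModelForCausalLM.from_pretrained(model_dir)

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Usage would look like `generate_with_ov("./Llama-2-13b-chat-hf-ov-int4", "Hello!")`, assuming the export command above has been run first.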

INFO:nncf:Statistics of the bitwidth distribution:

| Num bits (N) | % all parameters (layers) | % ratio-defining parameters (layers) |
|---|---|---|
| 8 | 22% (83 / 282) | 20% (81 / 280) |
| 4 | 78% (199 / 282) | 80% (199 / 280) |
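The `--group-size 128` option means weights are quantized in groups of 128 values, with each group sharing one scale. The following is a minimal numpy sketch of symmetric group-wise INT4 quantization to illustrate the idea; it is an assumption for exposition only, and NNCF's actual scheme differs in detail (e.g. it may use asymmetric quantization with zero-points).

```python
import numpy as np

def int4_group_quantize(weights: np.ndarray, group_size: int = 128):
    """Symmetric INT4 group quantization sketch (illustrative, not NNCF's code).

    Splits a flat weight tensor into groups of `group_size` values,
    computes one floating-point scale per group, and maps each value
    to a signed 4-bit integer in [-8, 7].
    """
    groups = weights.reshape(-1, group_size)
    # One scale per group: the largest magnitude maps to the INT4 limit 7.
    scales = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    scales = np.where(scales == 0, 1.0, scales)  # avoid division by zero
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Reconstruct approximate FP weights from INT4 codes and group scales."""
    return (q.astype(np.float32) * scales).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)  # two groups of 128
q, s = int4_group_quantize(w, group_size=128)
w_hat = dequantize(q, s)
max_err = float(np.abs(w - w_hat).max())  # bounded by half a quantization step
```

Because each group's extreme value sets its own scale, the rounding error is bounded by half a quantization step per group, which is why smaller group sizes generally reduce quantization error at the cost of storing more scales.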