This model is a quantized version of meta-llama/Llama-2-13b-hf, converted to the OpenVINO format. It was obtained via the nncf-quantization space with optimum-intel. First, make sure you have optimum-intel with OpenVINO support installed:

pip install optimum[openvino]
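
If you want to reproduce a similar 8-bit weight-compressed OpenVINO export locally, optimum-intel exposes this through optimum-cli. The command below is only a sketch: the output directory name is a placeholder, and the exact quantization settings used by the nncf-quantization space may differ.

optimum-cli export openvino --model meta-llama/Llama-2-13b-hf --weight-format int8 Llama-2-13b-hf-openvino-8bit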

To load the model, use OVModelForCausalLM from optimum-intel:

from optimum.intel import OVModelForCausalLM
model_id = "AIFunOver/Llama-2-13b-hf-openvino-8bit"
model = OVModelForCausalLM.from_pretrained(model_id)
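
Once loaded, the model can be used for text generation with a standard transformers tokenizer. This is a minimal sketch; the prompt and generation settings below are only illustrative.

from transformers import AutoTokenizer

# The tokenizer is shared with the original meta-llama/Llama-2-13b-hf model
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))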