kewin4933 committed
Commit 2e86dee
Parent: 4430239

Update README.md

Files changed (1): README.md (+2 -0)
README.md CHANGED
@@ -13,3 +13,5 @@ the two models also can be loaded by the [llama.cpp](https://github.com/ggergano
InferLLM supports the ChatGLM/ChatGLM2 models; chatglm-q4.bin/chatglm2-q4.bin are the int4 quantized models from [chatglm-6b](https://huggingface.co/THUDM/chatglm-6b)/[chatglm2-6b](https://huggingface.co/THUDM/chatglm2-6b).

InferLLM supports the baichuan model; baichuan-q4 is the int4 quantized model from [baichuan](https://huggingface.co/fireballoon/baichuan-vicuna-7b).
+
+ InferLLM supports the llama2 model; llama2-q4 is the int4 quantized model from [llama2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
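For context, the quantized `.bin` files named above are meant to be downloaded from this repo and then passed to the InferLLM (or llama.cpp) executable. Below is a minimal sketch of fetching one of them with the `huggingface_hub` Python client; the repo id `kewin4933/InferLLM-Model` and the exact filename `llama2-q4.bin` are assumptions inferred from this page, not stated in the diff itself.

```python
# Minimal sketch: fetch one of the int4 quantized model files from this repo.
# Assumptions (not stated in the README diff): the repo id is
# "kewin4933/InferLLM-Model" and the llama2 model file is named "llama2-q4.bin".
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="kewin4933/InferLLM-Model",  # assumed repo id
    filename="llama2-q4.bin",            # assumed filename of the int4 llama2 model
)

# The local path can then be passed to the InferLLM/llama.cpp binary on the
# command line (e.g. llama.cpp's `-m <model_path>` flag).
print(model_path)
```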