
Chinese-LLaMA-2-7B-16K-GGUF

This repository contains the GGUF-v3 models (llama.cpp compatible) for Chinese-LLaMA-2-7B-16K.
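
These files can be used directly with llama.cpp or through bindings such as llama-cpp-python. Below is a minimal sketch assuming the llama-cpp-python package; the GGUF filename is a placeholder, so substitute whichever quantization you downloaded from this repository.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model_path below is a hypothetical filename; replace it with the
# actual GGUF file downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="chinese-llama-2-7b-16k.Q4_K.gguf",  # placeholder filename
    n_ctx=16384,  # this model supports a 16K context window
)

output = llm(
    "请简要介绍一下中文大语言模型。",  # "Briefly introduce Chinese large language models."
    max_tokens=256,
)
print(output["choices"][0]["text"])
```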

Performance

Metric: PPL (perplexity); lower is better
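
For reference, perplexity is the exponentiated average negative log-likelihood of the evaluation text, so lower values mean the quantized model predicts the text more accurately:

$$
\mathrm{PPL} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N}\log p\left(x_i \mid x_{<i}\right)\right)
$$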

| Quant | original | imatrix (-im) |
|-------|----------|---------------|
| Q2_K  | 11.5580 +/- 0.23848 | 12.3757 +/- 0.26048 |
| Q3_K  | 9.8263 +/- 0.20663  | 9.7124 +/- 0.20569  |
| Q4_0  | 9.6558 +/- 0.20657  | -                   |
| Q4_K  | 9.5590 +/- 0.20460  | 9.4945 +/- 0.20337  |
| Q5_0  | 9.2767 +/- 0.19835  | -                   |
| Q5_K  | 9.4303 +/- 0.20305  | 9.4275 +/- 0.20291  |
| Q6_K  | 9.4046 +/- 0.20272  | 9.4106 +/- 0.20284  |
| Q8_0  | 9.2145 +/- 0.19943  | -                   |
| F16   | 9.4045 +/- 0.20289  | -                   |

Models with the -im suffix were quantized using an importance matrix, which is computed from calibration data and weights quantization error toward the most influential weights. This generally (though not always) yields better perplexity.

Others

For the Hugging Face version of this model, please see: https://huggingface.co/hfl/chinese-llama-2-7b-16k

Please refer to https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/ for more details.
