hfl/Chinese-Alpaca-2-7B-RLHF-GGUF

This repository contains the GGUF-v3 version (llama.cpp compatible) of Chinese-Alpaca-2-7B-RLHF, which was tuned from Chinese-Alpaca-2-7B with RLHF using DeepSpeed-Chat.
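
As a quick illustration, the sketch below loads one of the quantized GGUF files through the llama-cpp-python bindings, a common way to consume llama.cpp-compatible models from Python. The file name, context size, and prompt are placeholders, not values taken from this repository.

```python
# Minimal sketch: load a GGUF quant with llama-cpp-python and run a prompt.
# The model_path below is an assumed local file name -- substitute whichever
# quant file you actually downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="chinese-alpaca-2-7b-rlhf.Q4_K.gguf",  # assumed file name
    n_ctx=2048,  # context window size
)

output = llm(
    "请用一句话介绍大语言模型。",  # "Describe large language models in one sentence."
    max_tokens=128,
)
print(output["choices"][0]["text"])
```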

Performance

Metric: PPL (perplexity); lower is better

| Quant | PPL (original)      | PPL (imatrix / -im) |
|-------|---------------------|---------------------|
| Q2_K  | 10.5211 +/- 0.14139 | 11.9331 +/- 0.16168 |
| Q3_K  | 8.9748 +/- 0.12043  | 8.8238 +/- 0.11850  |
| Q4_0  | 8.7843 +/- 0.11854  | -                   |
| Q4_K  | 8.4643 +/- 0.11341  | 8.4226 +/- 0.11302  |
| Q5_0  | 8.4563 +/- 0.11353  | -                   |
| Q5_K  | 8.3722 +/- 0.11236  | 8.3336 +/- 0.11192  |
| Q6_K  | 8.3207 +/- 0.11184  | 8.3047 +/- 0.11159  |
| Q8_0  | 8.3100 +/- 0.11173  | -                   |
| F16   | 8.3112 +/- 0.11173  | -                   |

Models with the -im suffix are generated with an importance matrix, which generally (though not always) gives better performance.
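
For context on the metric itself, here is a small sketch of how perplexity relates to per-token log-probabilities: it is the exponential of the mean negative log-likelihood over the evaluation text, which is the standard definition reported by llama.cpp's perplexity evaluation. The numbers below are made-up placeholders, not output from this model.

```python
# Sketch: perplexity from per-token log-probabilities.
import math

token_logprobs = [-2.1, -0.4, -1.7, -3.0, -0.9]  # hypothetical per-token log p

nll = -sum(token_logprobs) / len(token_logprobs)  # mean negative log-likelihood
ppl = math.exp(nll)
print(f"perplexity = {ppl:.4f}")
```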

Others

For the full model in Hugging Face format, please see: https://huggingface.co/hfl/chinese-alpaca-2-7b-rlhf

Please refer to https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/ for more details.
