---
base_model: Qwen/Qwen2-7b
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- conversational
quantized_by: fedric95
---

## Llamacpp Quantizations of Qwen2-7b

Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3583">b3583</a> for quantization.

Original model: https://huggingface.co/Qwen/Qwen2-7b

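For context, a minimal sketch of the usual llama.cpp quantization workflow is below, assuming a local checkout of the original model and of llama.cpp at the release above. Script names and options can vary between releases, and the paths and output type here are illustrative placeholders, not the exact commands used for this repo:

```
# Convert the original Hugging Face checkpoint to a GGUF file,
# then quantize it to the desired type with llama-quantize.
python convert_hf_to_gguf.py ./Qwen2-7b --outfile Qwen2-7b.FP32.gguf --outtype f32
./llama-quantize Qwen2-7b.FP32.gguf Qwen2-7b-Q4_K_M.gguf Q4_K_M
```
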
## Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Perplexity (wikitext-2-raw-v1.test) |
| -------- | ---------- | --------- | ----------------------------------- |
| [Qwen2-7b.FP32.gguf](https://huggingface.co/fedric95/Qwen2-7b-GGUF/blob/main/Qwen2-7b.FP32.gguf) | BF16 | 15.2GB | coming_soon |
| [Qwen2-7b-Q8_0.gguf](https://huggingface.co/fedric95/Qwen2-7b-GGUF/blob/main/Qwen2-7b-Q8_0.gguf) | Q8_0 | 8.1GB | 7.3817 +/- 0.04777 |
| [Qwen2-7b-Q6_K.gguf](https://huggingface.co/fedric95/Qwen2-7b-GGUF/blob/main/Qwen2-7b-Q6_K.gguf) | Q6_K | 6.25GB | 7.3914 +/- 0.04776 |
| [Qwen2-7b-Q5_K_M.gguf](https://huggingface.co/fedric95/Qwen2-7b-GGUF/blob/main/Qwen2-7b-Q5_K_M.gguf) | Q5_K_M | 5.44GB | 7.4067 +/- 0.04794 |
| [Qwen2-7b-Q5_K_S.gguf](https://huggingface.co/fedric95/Qwen2-7b-GGUF/blob/main/Qwen2-7b-Q5_K_S.gguf) | Q5_K_S | | 7.4291 +/- 0.04822 |
| [Qwen2-7b-Q4_K_M.gguf](https://huggingface.co/fedric95/Qwen2-7b-GGUF/blob/main/Qwen2-7b-Q4_K_M.gguf) | Q4_K_M | 4.68GB | 7.4796 +/- 0.04856 |
| [Qwen2-7b-Q4_K_S.gguf](https://huggingface.co/fedric95/Qwen2-7b-GGUF/blob/main/Qwen2-7b-Q4_K_S.gguf) | Q4_K_S | 4.46GB | 7.5221 +/- 0.04879 |
| [Qwen2-7b-Q3_K_L.gguf](https://huggingface.co/fedric95/Qwen2-7b-GGUF/blob/main/Qwen2-7b-Q3_K_L.gguf) | Q3_K_L | 4.09GB | 7.6843 +/- 0.05000 |
| [Qwen2-7b-Q3_K_M.gguf](https://huggingface.co/fedric95/Qwen2-7b-GGUF/blob/main/Qwen2-7b-Q3_K_M.gguf) | Q3_K_M | 3.81GB | 7.7390 +/- 0.05015 |
| [Qwen2-7b-Q3_K_S.gguf](https://huggingface.co/fedric95/Qwen2-7b-GGUF/blob/main/Qwen2-7b-Q3_K_S.gguf) | Q3_K_S | 3.49GB | 9.3743 +/- 0.06023 |
| [Qwen2-7b-Q2_K.gguf](https://huggingface.co/fedric95/Qwen2-7b-GGUF/blob/main/Qwen2-7b-Q2_K.gguf) | Q2_K | 3.02GB | 10.5122 +/- 0.06850 |

## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download fedric95/Qwen2-7b-GGUF --include "Qwen2-7b-Q4_K_M.gguf" --local-dir ./
```
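
Once downloaded, the file can be loaded directly by llama.cpp. A minimal smoke test might look like this, assuming a llama.cpp build from the release above; the prompt and token count are arbitrary placeholders:

```
# Generate a short completion from the downloaded quantized model.
./llama-cli -m ./Qwen2-7b-Q4_K_M.gguf -p "The capital of France is" -n 32
```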

If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:

```
huggingface-cli download fedric95/Qwen2-7b-GGUF --include "Qwen2-7b-Q8_0.gguf/*" --local-dir Qwen2-7b-Q8_0
```

You can either specify a new local-dir (Qwen2-7b-Q8_0) or download them all in place (./).

## Reproducibility

Same instructions as: https://github.com/ggerganov/llama.cpp/discussions/9020#discussioncomment-10335638
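
As a rough sketch of the kind of command behind the perplexity column above, assuming a llama.cpp build and a local copy of the wikitext-2-raw-v1 test file (the file path is a placeholder; the linked discussion has the exact procedure):

```
# Measure perplexity of a quantized model on the wikitext-2 test set.
./llama-perplexity -m ./Qwen2-7b-Q4_K_M.gguf -f wikitext-2-raw/wiki.test.raw
```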