extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

## Llamacpp Quantizations of gemma-2-2b

Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3583">b3583</a> for quantization.

Original model: https://huggingface.co/google/gemma-2-2b
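
For reference, a minimal sketch of how quants like the ones below are typically produced with this llama.cpp release; the checkpoint path and the intermediate FP32 filename are illustrative, not the exact commands used here:

```
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b3583
make llama-quantize

# Convert the original Hugging Face checkpoint to a full-precision GGUF (path is a placeholder)
python convert_hf_to_gguf.py /path/to/gemma-2-2b --outtype f32 --outfile gemma-2-2b.FP32.gguf

# Quantize the FP32 GGUF down to one of the types in the table, e.g. Q4_K_M
./llama-quantize gemma-2-2b.FP32.gguf gemma-2-2b-Q4_K_M.gguf Q4_K_M
```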

## Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Perplexity (wikitext-2-raw-v1.test) |
| -------- | ---------- | --------- | ----------- |
| [gemma-2-2b.FP32.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b.FP32.gguf) | FP32 | 10.50GB | 8.9236 +/- 0.06373 |
| [gemma-2-2b-Q8_0.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q8_0.gguf) | Q8_0 | 2.78GB | 8.9299 +/- 0.06377 |
| [gemma-2-2b-Q6_K.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q6_K.gguf) | Q6_K | 2.15GB | 8.9570 +/- 0.06404 |
| [gemma-2-2b-Q5_K_M.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q5_K_M.gguf) | Q5_K_M | 1.92GB | 9.0061 +/- 0.06461 |
| [gemma-2-2b-Q5_K_S.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q5_K_S.gguf) | Q5_K_S | 1.88GB | 9.0096 +/- 0.06451 |
| [gemma-2-2b-Q4_K_M.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q4_K_M.gguf) | Q4_K_M | 1.71GB | 9.2260 +/- 0.06643 |
| [gemma-2-2b-Q4_K_S.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q4_K_S.gguf) | Q4_K_S | 1.64GB | 9.3116 +/- 0.06726 |
| [gemma-2-2b-Q3_K_L.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q3_K_L.gguf) | Q3_K_L | 1.55GB | 9.5683 +/- 0.06909 |
| [gemma-2-2b-Q3_K_M.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q3_K_M.gguf) | Q3_K_M | 1.46GB | 9.7759 +/- 0.07120 |
| [gemma-2-2b-Q3_K_S.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q3_K_S.gguf) | Q3_K_S | 1.36GB | 10.8067 +/- 0.08032 |
| [gemma-2-2b-Q2_K.gguf](https://huggingface.co/fedric95/Meta-Llama-3.1-8B-GGUF/blob/main/gemma-2-2b-Q2_K.gguf) | Q2_K | 1.23GB | 13.8994 +/- 0.10723 |
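
Once downloaded, a file can be sanity-checked by running it with the llama-cli binary from the same llama.cpp release; the prompt and token count below are arbitrary:

```
./llama-cli -m gemma-2-2b-Q4_K_M.gguf -p "The meaning of life is" -n 64
```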

## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download fedric95/gemma-2-2b-GGUF --include "gemma-2-2b-Q4_K_M.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download fedric95/gemma-2-2b-GGUF --include "gemma-2-2b-Q8_0.gguf/*" --local-dir gemma-2-2b-Q8_0
```

You can either specify a new local-dir (gemma-2-2b-Q8_0) or download them all in place (./).
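
Note that llama.cpp can load a split model directly by pointing at the first shard, so merging is optional. If you do want a single file, the release ships a gguf-split tool that can merge the shards; a sketch, with illustrative shard names:

```
# Merge split shards back into one GGUF (shard filenames are placeholders)
./llama-gguf-split --merge gemma-2-2b-Q8_0/gemma-2-2b-Q8_0-00001-of-00002.gguf gemma-2-2b-Q8_0.gguf
```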

## Reproducibility

https://github.com/ggerganov/llama.cpp/discussions/9020#discussioncomment-10335638
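
The perplexity column above comes from llama.cpp's perplexity tool evaluated on wikitext-2-raw-v1. A sketch of the typical invocation; the dataset path is a placeholder, and the discussion linked above describes the exact setup:

```
# Run llama.cpp's perplexity tool against the wikitext-2-raw test split
./llama-perplexity -m gemma-2-2b-Q4_K_M.gguf -f /path/to/wikitext-2-raw/wiki.test.raw
```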