Update README.md
README.md

This repo contains GGUF format model files for [01-ai/Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat).

The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).

<div style="text-align: left; margin: 20px 0;">
    <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
        Run them on the TensorBlock client using your local machine ↗
    </a>
</div>

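If you just want to sanity-check one of these files locally, any llama.cpp-based runtime can load them; the sketch below uses the llama-cpp-python bindings. The library, the chosen filename, and the generation settings are illustrative assumptions, not part of this repo.

```python
# Minimal smoke test with llama-cpp-python (assumed to be installed separately).
# The filename and settings are illustrative; pick any quant from the table below.
from llama_cpp import Llama

llm = Llama(
    model_path="Yi-34B-Chat-Q4_K_M.gguf",  # path to a downloaded GGUF file
    n_ctx=4096,                             # context window size
    n_gpu_layers=-1,                        # offload all layers if a GPU is available
)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Give me a one-sentence summary of GGUF."},
    ],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```
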
## Prompt template

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

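When driving the model through a lower-level API that expects a plain string rather than chat messages, the template above can be assembled directly. A minimal sketch, with a helper name made up for illustration:

```python
# Hypothetical helper that fills in the ChatML-style template shown above.
def build_prompt(system_prompt: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(build_prompt("You are a helpful assistant.", "What is quantization?"))
```
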
## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Yi-34B-Chat-Q2_K.gguf](https://huggingface.co/tensorblock/Yi-34B-Chat-GGUF/blob/main/Yi-34B-Chat-Q2_K.gguf) | Q2_K | 11.944 GB | smallest, significant quality loss - not recommended for most purposes |
| [Yi-34B-Chat-Q3_K_S.gguf](https://huggingface.co/tensorblock/Yi-34B-Chat-GGUF/blob/main/Yi-34B-Chat-Q3_K_S.gguf) | Q3_K_S | 13.933 GB | very small, high quality loss |
| [Yi-34B-Chat-Q3_K_M.gguf](https://huggingface.co/tensorblock/Yi-34B-Chat-GGUF/blob/main/Yi-34B-Chat-Q3_K_M.gguf) | Q3_K_M | 15.511 GB | very small, high quality loss |
| [Yi-34B-Chat-Q3_K_L.gguf](https://huggingface.co/tensorblock/Yi-34B-Chat-GGUF/blob/main/Yi-34B-Chat-Q3_K_L.gguf) | Q3_K_L | 16.894 GB | small, substantial quality loss |
| [Yi-34B-Chat-Q4_0.gguf](https://huggingface.co/tensorblock/Yi-34B-Chat-GGUF/blob/main/Yi-34B-Chat-Q4_0.gguf) | Q4_0 | 18.130 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Yi-34B-Chat-Q4_K_S.gguf](https://huggingface.co/tensorblock/Yi-34B-Chat-GGUF/blob/main/Yi-34B-Chat-Q4_K_S.gguf) | Q4_K_S | 18.253 GB | small, greater quality loss |
| [Yi-34B-Chat-Q4_K_M.gguf](https://huggingface.co/tensorblock/Yi-34B-Chat-GGUF/blob/main/Yi-34B-Chat-Q4_K_M.gguf) | Q4_K_M | 19.240 GB | medium, balanced quality - recommended |
| [Yi-34B-Chat-Q5_0.gguf](https://huggingface.co/tensorblock/Yi-34B-Chat-GGUF/blob/main/Yi-34B-Chat-Q5_0.gguf) | Q5_0 | 22.080 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Yi-34B-Chat-Q5_K_S.gguf](https://huggingface.co/tensorblock/Yi-34B-Chat-GGUF/blob/main/Yi-34B-Chat-Q5_K_S.gguf) | Q5_K_S | 22.080 GB | large, low quality loss - recommended |
| [Yi-34B-Chat-Q5_K_M.gguf](https://huggingface.co/tensorblock/Yi-34B-Chat-GGUF/blob/main/Yi-34B-Chat-Q5_K_M.gguf) | Q5_K_M | 22.651 GB | large, very low quality loss - recommended |
| [Yi-34B-Chat-Q6_K.gguf](https://huggingface.co/tensorblock/Yi-34B-Chat-GGUF/blob/main/Yi-34B-Chat-Q6_K.gguf) | Q6_K | 26.276 GB | very large, extremely low quality loss |
| [Yi-34B-Chat-Q8_0.gguf](https://huggingface.co/tensorblock/Yi-34B-Chat-GGUF/blob/main/Yi-34B-Chat-Q8_0.gguf) | Q8_0 | 34.033 GB | very large, extremely low quality loss - not recommended |

## Downloading instruction
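
One way to grab a single quantized file is with the huggingface_hub Python library. The snippet below is a minimal sketch under that assumption; the filename and target directory are placeholders, and it is not necessarily the exact workflow documented for this repo.

```python
# Download one GGUF file from the repo (huggingface_hub assumed to be installed).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="tensorblock/Yi-34B-Chat-GGUF",
    filename="Yi-34B-Chat-Q4_K_M.gguf",  # any filename from the table above
    local_dir=".",                        # where to place the file
)
print(f"Saved to: {path}")
```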