Update README.md
README.md
@@ -10,22 +10,22 @@ This repo contains GGUF format model files for Turdus-7B-GGUF.

 ## Model Info

-| path
-|
-| udkai/Turdus | mistral | MistralForCausalLM | 10000.0
+| path         | type    | architecture       | rope_theta | sliding_win | max_pos_embed |
+| ------------ | ------- | ------------------ | ---------- | ----------- | ------------- |
+| udkai/Turdus | mistral | MistralForCausalLM | 10000.0    | 4096        | 32768         |

 ## Provided Files

-|
-|
-| turdus-7b.IQ3_S.gguf
-| turdus-7b.IQ3_M.gguf
-| turdus-7b.Q4_0.gguf
-| turdus-7b.IQ4_NL.gguf | IQ4_NL |
-| turdus-7b.Q4_K_M.gguf | Q4_K_M |
-| turdus-7b.Q5_K_M.gguf | Q5_K_M |
-| turdus-7b.Q6_K.gguf
-| turdus-7b.Q8_0.gguf
+| Name                  | Quant  | Bits | File Size | Remark                           |
+| --------------------- | ------ | ---- | --------- | -------------------------------- |
+| turdus-7b.IQ3_S.gguf  | IQ3_S  | 3    | 3.18 GB   | 3.44 bpw quantization            |
+| turdus-7b.IQ3_M.gguf  | IQ3_M  | 3    | 3.28 GB   | 3.66 bpw quantization mix        |
+| turdus-7b.Q4_0.gguf   | Q4_0   | 4    | 4.11 GB   | 3.56G, +0.2166 ppl @ LLaMA-v1-7B |
+| turdus-7b.IQ4_NL.gguf | IQ4_NL | 4    | 4.16 GB   | 4.25 bpw non-linear quantization |
+| turdus-7b.Q4_K_M.gguf | Q4_K_M | 4    | 4.37 GB   | 3.80G, +0.0532 ppl @ LLaMA-v1-7B |
+| turdus-7b.Q5_K_M.gguf | Q5_K_M | 5    | 5.13 GB   | 4.45G, +0.0122 ppl @ LLaMA-v1-7B |
+| turdus-7b.Q6_K.gguf   | Q6_K   | 6    | 5.94 GB   | 5.15G, +0.0008 ppl @ LLaMA-v1-7B |
+| turdus-7b.Q8_0.gguf   | Q8_0   | 8    | 7.70 GB   | 6.70G, +0.0004 ppl @ LLaMA-v1-7B |

 # Original Model Card
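For orientation, here is a minimal usage sketch of the files listed above. It is not part of the commit; it assumes the llama-cpp-python bindings as the runtime, though any GGUF-compatible loader such as llama.cpp works the same way. Only the file name and the 32768 max_pos_embed come from the tables; the prompt and sampling settings are illustrative.

```python
# Minimal sketch, assuming `pip install llama-cpp-python` and that
# turdus-7b.Q4_K_M.gguf (4.37 GB per the table above) is downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="turdus-7b.Q4_K_M.gguf",
    n_ctx=4096,  # any value up to the model's max_pos_embed (32768)
)

# Plain completion call; output follows the OpenAI-style dict layout.
out = llm("Q: What is a GGUF file? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```

Q4_K_M is used here because the Remark column (perplexity deltas measured on LLaMA-v1-7B) suggests it as a reasonable size/quality default: +0.0532 ppl versus +0.2166 for Q4_0 at a similar file size.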