koesn committed
Commit 40c3429
1 Parent(s): e0c60dd

Update README.md

Files changed (1):
  1. README.md +11 -9
README.md CHANGED
@@ -8,15 +8,7 @@ license: cc-by-nc-4.0
 
 This repo contains GGUF format model files for Turdus-7B-GGUF.
 
-## Model Info
-
-| path         | type    | architecture       | rope_theta | sliding_win | max_pos_embed |
-| ------------ | ------- | ------------------ | ---------- | ----------- | ------------- |
-| udkai/Turdus | mistral | MistralForCausalLM | 10000.0    | 4096        | 32768         |
-
-![Turdus-7B](https://i.ibb.co/HhHqy4F/Turdus-7-B.png)
-
-## Provided Files
+## Files Provided
 
 | Name | Quant | Bits | File Size | Remark |
 | --------------------- | ------ | ---- | --------- | -------------------------------- |
@@ -29,6 +21,16 @@ This repo contains GGUF format model files for Turdus-7B-GGUF.
 | turdus-7b.Q6_K.gguf   | Q6_K   | 6    | 5.94 GB   | 5.15G, +0.0008 ppl @ LLaMA-v1-7B |
 | turdus-7b.Q8_0.gguf   | Q8_0   | 8    | 7.70 GB   | 6.70G, +0.0004 ppl @ LLaMA-v1-7B |
 
+## Parameters
+
+| path         | type    | architecture       | rope_theta | sliding_win | max_pos_embed |
+| ------------ | ------- | ------------------ | ---------- | ----------- | ------------- |
+| udkai/Turdus | mistral | MistralForCausalLM | 10000.0    | 4096        | 32768         |
+
+## Benchmarks
+
+![](https://i.ibb.co/jgS4ZNP/Turdus-7-B.png)
+
 # Original Model Card
 
 ---
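The README above lists GGUF quantizations of the model. As a quick sanity check after downloading one of these files, the fixed GGUF preamble (magic bytes, format version, tensor count, metadata key/value count) can be parsed with the standard library alone. A minimal sketch — the `read_gguf_header` helper is hypothetical, not part of llama.cpp or this repo:

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed 24-byte GGUF preamble.

    Layout (little-endian): 4-byte magic "GGUF", uint32 version,
    uint64 tensor count, uint64 metadata key/value count.
    """
    if len(data) < 24:
        raise ValueError("need at least 24 bytes of the file")
    magic, version, n_tensors, n_kv = struct.unpack("<4sIQQ", data[:24])
    if magic != b"GGUF":
        raise ValueError("not a GGUF file (bad magic)")
    return {"version": version, "tensor_count": n_tensors, "kv_count": n_kv}

if __name__ == "__main__":
    # In practice: read the first 24 bytes of e.g. turdus-7b.Q8_0.gguf.
    # Here we build a synthetic header purely for illustration.
    sample = struct.pack("<4sIQQ", b"GGUF", 3, 291, 24)
    print(read_gguf_header(sample))
```

A failed magic check usually means a truncated or HTML-error-page download rather than a real model file.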