datatab committed
Commit
3e104ee
1 Parent(s): a5d1454

Update README.md

Files changed (1):
  1. README.md +45 -9
README.md CHANGED
@@ -1,22 +1,58 @@
  ---
  language:
- - en
- license: apache-2.0
  tags:
  - text-generation-inference
  - transformers
- - unsloth
  - mistral
  - gguf
- base_model: datatab/Yugo45-GPT
  ---

- # Uploaded model

- - **Developed by:** datatab
  - **License:** apache-2.0
- - **Finetuned from model:** datatab/Yugo45-GPT

- This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
  ---
  language:
+ - sr
+ license: cc
  tags:
  - text-generation-inference
  - transformers
  - mistral
  - gguf
+ base_model: gordicaleksa/YugoGPT
+ model_creator: Gordic Aleksa
+ model_type: mistral
+ quantized_by: datatab
+ datasets:
+ - datatab/alpaca-cleaned-serbian-full
  ---
 
+ # Yugo45-GPT-Quantized-GGUF

+ - **Quantized by:** datatab
  - **License:** apache-2.0

+ <!-- description start -->
+ ## Description

+ This repo contains GGUF format model files for [Yugo45-GPT](https://huggingface.co/datatab/Yugo45-GPT).
+
+ <!-- description end -->
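A minimal usage sketch, assuming llama-cpp-python as the runtime: the repo id and GGUF filename below are illustrative guesses, so check this repo's file listing for the real names.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized variant from the Hub.
# NOTE: repo_id and filename are illustrative assumptions; use the actual
# GGUF filenames listed under this repo's "Files and versions" tab.
gguf_path = hf_hub_download(
    repo_id="datatab/Yugo45-GPT-Quantized-GGUF",  # assumed repo id
    filename="yugo45-gpt.Q4_K_M.gguf",            # assumed filename
)

# Load the GGUF file and run a short Serbian completion.
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Напиши кратку причу о Београду.", max_tokens=128)
print(out["choices"][0]["text"])
```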
+
+ # Quantization preference
+
+ | Quant.         | Description                                                                                       |
+ |----------------|---------------------------------------------------------------------------------------------------|
+ | not_quantized  | Recommended. Fast conversion. Slow inference, big files.                                          |
+ | fast_quantized | Recommended. Fast conversion. OK inference, OK file size.                                         |
+ | quantized      | Recommended. Slow conversion. Fast inference, small files.                                        |
+ | f32            | Not recommended. Retains 100% accuracy, but super slow and memory hungry.                         |
+ | f16            | Fastest conversion + retains 100% accuracy. Slow and memory hungry.                               |
+ | q8_0           | Fast conversion. High resource use, but generally acceptable.                                     |
+ | q4_k_m         | Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.       |
+ | q5_k_m         | Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.       |
+ | q2_k           | Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.           |
+ | q3_k_l         | Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K.             |
+ | q3_k_m         | Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K.             |
+ | q3_k_s         | Uses Q3_K for all tensors.                                                                        |
+ | q4_0           | Original quant method, 4-bit.                                                                     |
+ | q4_1           | Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. |
+ | q4_k_s         | Uses Q4_K for all tensors.                                                                        |
+ | q4_k           | Alias for q4_k_m.                                                                                 |
+ | q5_k           | Alias for q5_k_m.                                                                                 |
+ | q5_0           | Higher accuracy, higher resource usage, and slower inference.                                     |
+ | q5_1           | Even higher accuracy and resource usage, and slower inference.                                    |
+ | q5_k_s         | Uses Q5_K for all tensors.                                                                        |
+ | q6_k           | Uses Q8_K for all tensors.                                                                        |
+ | iq2_xxs        | 2.06 bpw quantization.                                                                            |
+ | iq2_xs         | 2.31 bpw quantization.                                                                            |
+ | iq3_xxs        | 3.06 bpw quantization.                                                                            |
+ | q3_k_xs        | 3-bit extra small quantization.                                                                   |
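The method names in this table match Unsloth's `quantization_method` options. As a rough sketch, assuming an Unsloth export path like the one referenced in the original model card (the output directory name is illustrative, not a real path), a variant such as q4_k_m could be produced like this:

```python
from unsloth import FastLanguageModel

# Load the base fine-tune (name taken from the model card).
model, tokenizer = FastLanguageModel.from_pretrained("datatab/Yugo45-GPT")

# Export a GGUF file using one quantization method from the table above.
# "yugo45-gguf" is an illustrative output directory.
model.save_pretrained_gguf("yugo45-gguf", tokenizer, quantization_method="q4_k_m")
```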