Ichsan2895 committed commit ccda4ec (1 parent: 121a49e)

Update README.md

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -51,14 +51,14 @@ They are also compatible with many third party UIs and libraries - please see th

  | Name | Quant method | Bits | Size | Use case |
  | ---- | ---- | ---- | ---- | ----- |
- | [Merak-7B-v4-PROTOTYPE6-model-Q2_K.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-PROTOTYPE6-GGUF/blob/main/Merak-7B-v4-PROTOTYPE6-model-q2_k.gguf) | Q2_K | 2 | 2.83 GB| smallest, significant quality loss - not recommended for most purposes |
- | [Merak-7B-v4-PROTOTYPE6-model-Q3_K_M.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-PROTOTYPE6-GGUF/blob/main/Merak-7B-v4-PROTOTYPE6-model-q3_k_m.gguf) | Q3_K_M | 3 | 3.3 GB| very small, high quality loss |
- | [Merak-7B-v4-PROTOTYPE6-model-Q4_0.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-PROTOTYPE6-GGUF/blob/main/Merak-7B-v4-PROTOTYPE6-model-q4_0.gguf) | Q4_0 | 4 | 3.83 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
- | [Merak-7B-v4-PROTOTYPE6-model-Q4_K_M.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-PROTOTYPE6-GGUF/blob/main/Merak-7B-v4-PROTOTYPE6-model-q4_k_m.gguf) | Q4_K_M | 4 | 4.08 GB| medium, balanced quality - recommended |
- | [Merak-7B-v4-PROTOTYPE6-model-Q5_0.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-PROTOTYPE6-GGUF/blob/main/Merak-7B-v4-PROTOTYPE6-model-q5_0.gguf) | Q5_0 | 5 | 4.65 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
- | [Merak-7B-v4-PROTOTYPE6-model-Q5_K_M.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-PROTOTYPE6-GGUF/blob/main/Merak-7B-v4-PROTOTYPE6-model-q5_k_m.gguf) | Q5_K_M | 5 | 4.78 GB| large, very low quality loss - recommended |
- | [Merak-7B-v4-PROTOTYPE6-model-Q6_K.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-PROTOTYPE6-GGUF/blob/main/Merak-7B-v4-PROTOTYPE6-model-q6_k.gguf) | Q6_K | 6 | 5.53 GB| very large, extremely low quality loss |
- | [Merak-7B-v4-PROTOTYPE6-model-Q8_0.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-PROTOTYPE6-GGUF/blob/main/Merak-7B-v4-PROTOTYPE6-model-q8_0.gguf) | Q8_0 | 8 | 7.16 GB| very large, extremely low quality loss - not recommended |
+ | [Merak-7B-v4-PROTOTYPE6-model-Q2_K.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-PROTOTYPE6-GGUF/blob/main/Merak-7B-v4-PROTOTYPE6-model-q2_k.gguf) | Q2_K | 2 | 3.08 GB| smallest, significant quality loss - not recommended for most purposes |
+ | [Merak-7B-v4-PROTOTYPE6-model-Q3_K_M.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-PROTOTYPE6-GGUF/blob/main/Merak-7B-v4-PROTOTYPE6-model-q3_k_m.gguf) | Q3_K_M | 3 | 3.52 GB| very small, high quality loss |
+ | [Merak-7B-v4-PROTOTYPE6-model-Q4_0.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-PROTOTYPE6-GGUF/blob/main/Merak-7B-v4-PROTOTYPE6-model-q4_0.gguf) | Q4_0 | 4 | 4.11 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [Merak-7B-v4-PROTOTYPE6-model-Q4_K_M.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-PROTOTYPE6-GGUF/blob/main/Merak-7B-v4-PROTOTYPE6-model-q4_k_m.gguf) | Q4_K_M | 4 | 4.37 GB| medium, balanced quality - recommended |
+ | [Merak-7B-v4-PROTOTYPE6-model-Q5_0.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-PROTOTYPE6-GGUF/blob/main/Merak-7B-v4-PROTOTYPE6-model-q5_0.gguf) | Q5_0 | 5 | 5 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [Merak-7B-v4-PROTOTYPE6-model-Q5_K_M.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-PROTOTYPE6-GGUF/blob/main/Merak-7B-v4-PROTOTYPE6-model-q5_k_m.gguf) | Q5_K_M | 5 | 5.13 GB| large, very low quality loss - recommended |
+ | [Merak-7B-v4-PROTOTYPE6-model-Q6_K.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-PROTOTYPE6-GGUF/blob/main/Merak-7B-v4-PROTOTYPE6-model-q6_k.gguf) | Q6_K | 6 | 5.94 GB| very large, extremely low quality loss |
+ | [Merak-7B-v4-PROTOTYPE6-model-Q8_0.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v4-PROTOTYPE6-GGUF/blob/main/Merak-7B-v4-PROTOTYPE6-model-q8_0.gguf) | Q8_0 | 8 | 7.7 GB| very large, extremely low quality loss - not recommended |

  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
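As a rough illustration of the size/quality trade-off in the corrected table, a small helper can pick the largest quant file that fits a given RAM budget. This is a sketch, not part of llama.cpp or this repository: the function name and the "largest that fits" rule are illustrative, the sizes are the updated figures from the commit, and, per the note above, file size is only a lower bound on actual RAM use (KV cache and context buffers add more).

```python
# Quantized file sizes in GB, taken from the updated table in this commit.
QUANT_SIZES_GB = {
    "Q2_K": 3.08, "Q3_K_M": 3.52, "Q4_0": 4.11, "Q4_K_M": 4.37,
    "Q5_0": 5.00, "Q5_K_M": 5.13, "Q6_K": 5.94, "Q8_0": 7.70,
}

def largest_fitting_quant(ram_budget_gb):
    """Return the name of the largest quant whose file fits the budget.

    Treat the result as optimistic: real RAM usage exceeds file size,
    and GPU offloading (see the note above) shifts part of it to VRAM.
    """
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items()
               if size <= ram_budget_gb]
    return max(fitting)[1] if fitting else None

print(largest_fitting_quant(4.5))  # Q4_K_M fits; Q5_0 and above do not
print(largest_fitting_quant(2.0))  # None: even Q2_K needs ~3.08 GB on disk
```

The flat "largest that fits" rule ignores the table's own advice (e.g. preferring Q4_K_M over the legacy Q5_0); in practice the recommended k-quants are usually the better pick at a given size.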