bartowski committed
Commit 7dd9d5e
1 Parent(s): 6bb387a

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +5 -6
README.md CHANGED
@@ -1,14 +1,11 @@
 ---
- base_model: rombodawg/Rombos-LLM-V2.5-Qwen-32b
- library_name: transformers
- license: apache-2.0
- pipeline_tag: text-generation
 quantized_by: bartowski
+ pipeline_tag: text-generation
 ---

- ## Llamacpp imatrix Quantizations of Rombos-LLM-V2.5-Qwen-32b
+ ## Llamacpp imatrix Quantizations of Replete-LLM-V2.5-Qwen-32b

- Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3825">b3825</a> for quantization.
+ Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3972">b3972</a> for quantization.

 Original model: https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-32b

@@ -31,6 +28,7 @@ Run them in [LM Studio](https://lmstudio.ai/)
 | Filename | Quant type | File Size | Split | Description |
 | -------- | ---------- | --------- | ----- | ----------- |
 | [Replete-LLM-V2.5-Qwen-32b-f16.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/tree/main/Replete-LLM-V2.5-Qwen-32b-f16) | f16 | 65.54GB | true | Full F16 weights. |
+ | [Replete-LLM-V2.5-Qwen-32b-f16.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/tree/main/Replete-LLM-V2.5-Qwen-32b-f16) | f16 | 65.54GB | true | Full F16 weights. |
 | [Replete-LLM-V2.5-Qwen-32b-Q8_0.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q8_0.gguf) | Q8_0 | 34.82GB | false | Extremely high quality, generally unneeded but max available quant. |
 | [Replete-LLM-V2.5-Qwen-32b-Q6_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q6_K_L.gguf) | Q6_K_L | 27.26GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
 | [Replete-LLM-V2.5-Qwen-32b-Q6_K.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q6_K.gguf) | Q6_K | 26.89GB | false | Very high quality, near perfect, *recommended*. |
@@ -41,6 +39,7 @@ Run them in [LM Studio](https://lmstudio.ai/)
 | [Replete-LLM-V2.5-Qwen-32b-Q4_K_M.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q4_K_M.gguf) | Q4_K_M | 19.85GB | false | Good quality, default size for most use cases, *recommended*. |
 | [Replete-LLM-V2.5-Qwen-32b-Q4_K_S.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q4_K_S.gguf) | Q4_K_S | 18.78GB | false | Slightly lower quality with more space savings, *recommended*. |
 | [Replete-LLM-V2.5-Qwen-32b-Q4_0.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q4_0.gguf) | Q4_0 | 18.71GB | false | Legacy format, generally not worth using over similarly sized formats. |
+ | [Replete-LLM-V2.5-Qwen-32b-IQ4_NL.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-IQ4_NL.gguf) | IQ4_NL | 18.68GB | false | Similar to IQ4_XS, but slightly larger. |
 | [Replete-LLM-V2.5-Qwen-32b-Q3_K_XL.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q3_K_XL.gguf) | Q3_K_XL | 17.93GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
 | [Replete-LLM-V2.5-Qwen-32b-IQ4_XS.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-IQ4_XS.gguf) | IQ4_XS | 17.69GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
 | [Replete-LLM-V2.5-Qwen-32b-Q3_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q3_K_L.gguf) | Q3_K_L | 17.25GB | false | Lower quality but usable, good for low RAM availability. |
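
The file sizes in the table map directly to hardware limits. As an illustrative sketch (not part of the original card, and the 2GB headroom figure is an assumption, not a measured value), here is one way to pick the largest listed quant that fits a given memory budget, using the sizes copied from the table:

```python
# Illustrative helper (not from the model card): pick the largest quant
# from the table above that fits a memory budget, leaving headroom for
# KV cache and runtime overhead. Sizes in GB, copied from the table.
from typing import Optional

QUANT_SIZES_GB = {
    "f16": 65.54, "Q8_0": 34.82, "Q6_K_L": 27.26, "Q6_K": 26.89,
    "Q4_K_M": 19.85, "Q4_K_S": 18.78, "Q4_0": 18.71, "IQ4_NL": 18.68,
    "Q3_K_XL": 17.93, "IQ4_XS": 17.69, "Q3_K_L": 17.25,
}

def pick_quant(budget_gb: float, headroom_gb: float = 2.0) -> Optional[str]:
    """Return the largest listed quant whose file fits in budget minus headroom."""
    usable = budget_gb - headroom_gb
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= usable}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(pick_quant(24.0))  # on a 24GB GPU this selects Q4_K_M
```

If nothing fits (e.g. an 8GB budget), the helper returns `None`, which points toward smaller I-quants not listed in this diff.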