Upload README.md with huggingface_hub

README.md (CHANGED)
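
The commit title above matches the default commit message `huggingface_hub` uses when a file is pushed with its upload API rather than through the web UI. A minimal, hypothetical sketch of such an upload (the credentials setup and target repo id are assumptions, not details taken from this page):

```python
from huggingface_hub import HfApi

# Assumes credentials are already available, e.g. via `huggingface-cli login`
# or an HF_TOKEN environment variable.
api = HfApi()

# Uploading README.md like this yields the default commit message
# "Upload README.md with huggingface_hub" seen above.
api.upload_file(
    path_or_fileobj="README.md",
    path_in_repo="README.md",
    repo_id="bartowski/MiniCPM-V-2_6-GGUF",
    repo_type="model",
)
```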

Removed from the previous YAML front matter (only `quantized_by: bartowski` and the `---` delimiters were kept):

- base_model: openbmb/MiniCPM-V-2_6
- datasets: openbmb/RLAIF-V-Dataset
- language: multilingual
- library_name: transformers
- pipeline_tag: image-text-to-text
- tags: minicpm-v, vision, ocr, multi-image, video, custom_code

The quantization release reference was also updated: the previous README used <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3600">b3600</a>. The prompt format, download table, and embed/output weights sections carry over into the updated README below.

---
quantized_by: bartowski
pipeline_tag: text-generation
---

## Llamacpp imatrix Quantizations of MiniCPM-V-2_6

Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3615">b3615</a> for quantization.

Original model: https://huggingface.co/openbmb/MiniCPM-V-2_6

All quants were made using the imatrix option with the dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
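
The imatrix step referenced above is part of the usual llama.cpp quantization flow: an importance matrix is computed from a calibration text file and then passed to the quantizer so that activation-sensitive weights keep more precision. A rough sketch of that flow is below; the file names are placeholders and the presence of the llama.cpp b3615 tools on PATH is an assumption, not something stated on this card.

```python
import subprocess

# Assumed local files (placeholders): a full-precision GGUF conversion of the model
# and the calibration text saved from the gist linked above.
source_gguf = "MiniCPM-V-2_6-f16.gguf"
calibration_txt = "calibration_data.txt"

# 1. Compute the importance matrix over the calibration data.
subprocess.run(
    ["llama-imatrix", "-m", source_gguf, "-f", calibration_txt, "-o", "imatrix.dat"],
    check=True,
)

# 2. Quantize with that importance matrix (Q4_K_M shown as one example target).
subprocess.run(
    ["llama-quantize", "--imatrix", "imatrix.dat",
     source_gguf, "MiniCPM-V-2_6-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```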

Run them in [LM Studio](https://lmstudio.ai/)

## Prompt format

```
…
<|im_start|>assistant
```

## What's new:

Applying the imatrix should improve the text portion of the model.

## Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| … | | | | |
| [MiniCPM-V-2_6-Q4_K_M.gguf](https://huggingface.co/bartowski/MiniCPM-V-2_6-GGUF/blob/main/MiniCPM-V-2_6-Q4_K_M.gguf) | Q4_K_M | 4.68GB | false | Good quality, default size for most use cases, *recommended*. |
| [MiniCPM-V-2_6-Q3_K_XL.gguf](https://huggingface.co/bartowski/MiniCPM-V-2_6-GGUF/blob/main/MiniCPM-V-2_6-Q3_K_XL.gguf) | Q3_K_XL | 4.56GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [MiniCPM-V-2_6-Q4_K_S.gguf](https://huggingface.co/bartowski/MiniCPM-V-2_6-GGUF/blob/main/MiniCPM-V-2_6-Q4_K_S.gguf) | Q4_K_S | 4.46GB | false | Slightly lower quality with more space savings, *recommended*. |
| [MiniCPM-V-2_6-IQ4_XS.gguf](https://huggingface.co/bartowski/MiniCPM-V-2_6-GGUF/blob/main/MiniCPM-V-2_6-IQ4_XS.gguf) | IQ4_XS | 4.22GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [MiniCPM-V-2_6-Q3_K_L.gguf](https://huggingface.co/bartowski/MiniCPM-V-2_6-GGUF/blob/main/MiniCPM-V-2_6-Q3_K_L.gguf) | Q3_K_L | 4.09GB | false | Lower quality but usable, good for low RAM availability. |
| [MiniCPM-V-2_6-Q3_K_M.gguf](https://huggingface.co/bartowski/MiniCPM-V-2_6-GGUF/blob/main/MiniCPM-V-2_6-Q3_K_M.gguf) | Q3_K_M | 3.81GB | false | Low quality. |
| [MiniCPM-V-2_6-IQ3_M.gguf](https://huggingface.co/bartowski/MiniCPM-V-2_6-GGUF/blob/main/MiniCPM-V-2_6-IQ3_M.gguf) | IQ3_M | 3.57GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [MiniCPM-V-2_6-Q2_K_L.gguf](https://huggingface.co/bartowski/MiniCPM-V-2_6-GGUF/blob/main/MiniCPM-V-2_6-Q2_K_L.gguf) | Q2_K_L | 3.55GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [MiniCPM-V-2_6-Q3_K_S.gguf](https://huggingface.co/bartowski/MiniCPM-V-2_6-GGUF/blob/main/MiniCPM-V-2_6-Q3_K_S.gguf) | Q3_K_S | 3.49GB | false | Low quality, not recommended. |
| [MiniCPM-V-2_6-IQ3_XS.gguf](https://huggingface.co/bartowski/MiniCPM-V-2_6-GGUF/blob/main/MiniCPM-V-2_6-IQ3_XS.gguf) | IQ3_XS | 3.34GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [MiniCPM-V-2_6-Q2_K.gguf](https://huggingface.co/bartowski/MiniCPM-V-2_6-GGUF/blob/main/MiniCPM-V-2_6-Q2_K.gguf) | Q2_K | 3.01GB | false | Very low quality but surprisingly usable. |
| [MiniCPM-V-2_6-IQ2_M.gguf](https://huggingface.co/bartowski/MiniCPM-V-2_6-GGUF/blob/main/MiniCPM-V-2_6-IQ2_M.gguf) | IQ2_M | 2.78GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
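
For scripted downloads of a single quant, the `huggingface_hub` client can fetch one file from the repo without pulling the whole branch. A small sketch, assuming the repo id above and the Q4_K_M file as the example target:

```python
from huggingface_hub import hf_hub_download

# Downloads only the requested GGUF file (not the whole branch) into ./models.
local_path = hf_hub_download(
    repo_id="bartowski/MiniCPM-V-2_6-GGUF",
    filename="MiniCPM-V-2_6-Q4_K_M.gguf",
    local_dir="models",
)
print(local_path)
```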

## Embed/output weights