bartowski committed
Commit cd54a7d (parent f5aaf81)

Upload README.md with huggingface_hub

Files changed (1): README.md (+28, -39)
README.md CHANGED
@@ -1,28 +1,18 @@
----
-license: mit
-license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
-
-language:
-- en
-pipeline_tag: text-generation
-tags:
-- nlp
-- code
-widget:
-- messages:
-  - role: user
-    content: Can you provide ways to eat combinations of bananas and dragonfruits?
+---
 quantized_by: bartowski
+pipeline_tag: text-generation
 ---
 
 ## Llamacpp imatrix Quantizations of Phi-3.1-mini-128k-instruct
 
-Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3291">b3291</a> for quantization.
+Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3460">b3460</a> for quantization.
 
 Original model: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct
 
 All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
 
+Run them in [LM Studio](https://lmstudio.ai/)
+
 ## Prompt format
 
 ```
@@ -31,30 +21,28 @@ All quants made using imatrix option with dataset from [here](https://gist.githu
 
 ## Download a file (not the whole branch) from below:
 
-| Filename | Quant type | File Size | Description |
-| -------- | ---------- | --------- | ----------- |
-| [Phi-3.1-mini-128k-instruct-Q8_0.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q8_0.gguf) | Q8_0 | 4.06GB | Extremely high quality, generally unneeded but max available quant. |
-| [Phi-3.1-mini-128k-instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q6_K_L.gguf) | Q6_K_L | 3.18GB | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
-| [Phi-3.1-mini-128k-instruct-Q6_K.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q6_K.gguf) | Q6_K | 3.13GB | Very high quality, near perfect, *recommended*. |
-| [Phi-3.1-mini-128k-instruct-Q5_K_L.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q5_K_L.gguf) | Q5_K_L | 2.87GB | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
-| [Phi-3.1-mini-128k-instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q5_K_M.gguf) | Q5_K_M | 2.81GB | High quality, *recommended*. |
-| [Phi-3.1-mini-128k-instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q5_K_S.gguf) | Q5_K_S | 2.64GB | High quality, *recommended*. |
-| [Phi-3.1-mini-128k-instruct-Q4_K_L.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q4_K_L.gguf) | Q4_K_L | 2.46GB | Uses Q8_0 for embed and output weights. Good quality, uses about 4.83 bits per weight, *recommended*. |
-| [Phi-3.1-mini-128k-instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q4_K_M.gguf) | Q4_K_M | 2.39GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
-| [Phi-3.1-mini-128k-instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q4_K_S.gguf) | Q4_K_S | 2.18GB | Slightly lower quality with more space savings, *recommended*. |
-| [Phi-3.1-mini-128k-instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-IQ4_XS.gguf) | IQ4_XS | 2.05GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
-| [Phi-3.1-mini-128k-instruct-Q3_K_XL.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q3_K_XL.gguf) | Q3_K_XL | 2.17GB | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
-| [Phi-3.1-mini-128k-instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q3_K_L.gguf) | Q3_K_L | 2.08GB | Lower quality but usable, good for low RAM availability. |
-| [Phi-3.1-mini-128k-instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q3_K_M.gguf) | Q3_K_M | 1.95GB | Even lower quality. |
-| [Phi-3.1-mini-128k-instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-IQ3_M.gguf) | IQ3_M | 1.85GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
-| [Phi-3.1-mini-128k-instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q3_K_S.gguf) | Q3_K_S | 1.68GB | Low quality, not recommended. |
-| [Phi-3.1-mini-128k-instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-IQ3_XS.gguf) | IQ3_XS | 1.62GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
-| [Phi-3.1-mini-128k-instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-IQ3_XXS.gguf) | IQ3_XXS | 1.51GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
-| [Phi-3.1-mini-128k-instruct-Q2_K_L.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q2_K_L.gguf) | Q2_K_L | 1.51GB | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
-| [Phi-3.1-mini-128k-instruct-Q2_K.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q2_K.gguf) | Q2_K | 1.41GB | Very low quality but surprisingly usable. |
-| [Phi-3.1-mini-128k-instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-IQ2_M.gguf) | IQ2_M | 1.31GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
-| [Phi-3.1-mini-128k-instruct-IQ2_S.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-IQ2_S.gguf) | IQ2_S | 1.21GB | Very low quality, uses SOTA techniques to be usable. |
-| [Phi-3.1-mini-128k-instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-IQ2_XS.gguf) | IQ2_XS | 1.15GB | Very low quality, uses SOTA techniques to be usable. |
+| Filename | Quant type | File Size | Split | Description |
+| -------- | ---------- | --------- | ----- | ----------- |
+| [Phi-3.1-mini-128k-instruct-f32.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-f32.gguf) | f32 | 15.29GB | false | Full F32 weights. |
+| [Phi-3.1-mini-128k-instruct-Q8_0.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q8_0.gguf) | Q8_0 | 4.06GB | false | Extremely high quality, generally unneeded but max available quant. |
+| [Phi-3.1-mini-128k-instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q6_K_L.gguf) | Q6_K_L | 3.18GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
+| [Phi-3.1-mini-128k-instruct-Q6_K.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q6_K.gguf) | Q6_K | 3.14GB | false | Very high quality, near perfect, *recommended*. |
+| [Phi-3.1-mini-128k-instruct-Q5_K_L.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q5_K_L.gguf) | Q5_K_L | 2.88GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
+| [Phi-3.1-mini-128k-instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q5_K_M.gguf) | Q5_K_M | 2.82GB | false | High quality, *recommended*. |
+| [Phi-3.1-mini-128k-instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q5_K_S.gguf) | Q5_K_S | 2.64GB | false | High quality, *recommended*. |
+| [Phi-3.1-mini-128k-instruct-Q4_K_L.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q4_K_L.gguf) | Q4_K_L | 2.47GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
+| [Phi-3.1-mini-128k-instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q4_K_M.gguf) | Q4_K_M | 2.39GB | false | Good quality, default size for most use cases, *recommended*. |
+| [Phi-3.1-mini-128k-instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q4_K_S.gguf) | Q4_K_S | 2.19GB | false | Slightly lower quality with more space savings, *recommended*. |
+| [Phi-3.1-mini-128k-instruct-Q3_K_XL.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q3_K_XL.gguf) | Q3_K_XL | 2.17GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
+| [Phi-3.1-mini-128k-instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q3_K_L.gguf) | Q3_K_L | 2.09GB | false | Lower quality but usable, good for low RAM availability. |
+| [Phi-3.1-mini-128k-instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-IQ4_XS.gguf) | IQ4_XS | 2.06GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
+| [Phi-3.1-mini-128k-instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q3_K_M.gguf) | Q3_K_M | 1.96GB | false | Low quality. |
+| [Phi-3.1-mini-128k-instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-IQ3_M.gguf) | IQ3_M | 1.86GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
+| [Phi-3.1-mini-128k-instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q3_K_S.gguf) | Q3_K_S | 1.68GB | false | Low quality, not recommended. |
+| [Phi-3.1-mini-128k-instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-IQ3_XS.gguf) | IQ3_XS | 1.63GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
+| [Phi-3.1-mini-128k-instruct-Q2_K_L.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q2_K_L.gguf) | Q2_K_L | 1.51GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
+| [Phi-3.1-mini-128k-instruct-Q2_K.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-Q2_K.gguf) | Q2_K | 1.42GB | false | Very low quality but surprisingly usable. |
+| [Phi-3.1-mini-128k-instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-128k-instruct-GGUF/blob/main/Phi-3.1-mini-128k-instruct-IQ2_M.gguf) | IQ2_M | 1.32GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
 
 ## Credits
 
@@ -109,3 +97,4 @@ These I-quants can also be used on CPU and Apple Metal, but will be slower than
 The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
 
 Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
+
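To grab just one of the files from the table rather than the whole branch, the Hugging Face CLI works well. A minimal sketch, assuming `huggingface_hub` is installed and Q4_K_M is the quant you want:

```bash
# Install the CLI first if needed: pip install -U "huggingface_hub[cli]"
# --include restricts the download to the one matching file;
# --local-dir drops it into the current directory.
huggingface-cli download bartowski/Phi-3.1-mini-128k-instruct-GGUF \
  --include "Phi-3.1-mini-128k-instruct-Q4_K_M.gguf" \
  --local-dir ./
```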
 
 
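Once downloaded, the file runs anywhere llama.cpp runs. A minimal single-prompt sketch, assuming the `llama-cli` binary name from release b3460 and the Phi-3 family's `<|user|>` / `<|end|>` / `<|assistant|>` tags; the widget prompt from the old front matter makes a handy smoke test:

```bash
# -e expands the \n escapes in the prompt string; -ngl 99 offloads all layers
# to the GPU (omit it for CPU-only); -c sets the context window (128k is
# supported, but larger contexts need far more memory).
./llama-cli -m Phi-3.1-mini-128k-instruct-Q4_K_M.gguf -e \
  -p "<|user|>\nCan you provide ways to eat combinations of bananas and dragonfruits?<|end|>\n<|assistant|>\n" \
  -n 256 -c 4096 -ngl 99
```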
 
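The "imatrix option" the README mentions is llama.cpp's importance-matrix workflow: a calibration pass records which weights matter most on a sample text, and the quantizer then spends its bit budget accordingly. A rough two-step sketch, assuming the b3460 tool names and the linked calibration dataset saved locally as `calibration.txt` (a hypothetical filename):

```bash
# Step 1: run the calibration text through a full-precision GGUF to collect
# activation statistics into imatrix.dat.
./llama-imatrix -m Phi-3.1-mini-128k-instruct-f32.gguf -f calibration.txt -o imatrix.dat

# Step 2: quantize, letting the matrix guide which weights keep more precision.
./llama-quantize --imatrix imatrix.dat \
  Phi-3.1-mini-128k-instruct-f32.gguf Phi-3.1-mini-128k-instruct-Q4_K_M.gguf Q4_K_M
```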
 
 
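The `_L` and `_XL` variants in the table (Q6_K_L, Q5_K_L, Q4_K_L, Q3_K_XL, Q2_K_L) differ from their base quants only in keeping the token-embedding and output tensors at Q8_0, which is why each is slightly larger than its base. A sketch of how such a file could be produced, assuming llama.cpp's per-tensor override flags around b3460:

```bash
# Same recipe as a plain Q6_K quant, except the embedding and output tensors
# are pinned to Q8_0 instead of being quantized with the rest.
./llama-quantize --imatrix imatrix.dat \
  --token-embedding-type q8_0 --output-tensor-type q8_0 \
  Phi-3.1-mini-128k-instruct-f32.gguf Phi-3.1-mini-128k-instruct-Q6_K_L.gguf Q6_K
```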