Update README.md
README.md CHANGED
@@ -1,6 +1,11 @@
 ---
 inference: false
-license: other
+tags:
+- generated_from_trainer
+model-index:
+- name: starchat-beta
+  results: []
+license: bigcode-openrail-m
 ---
 
 <!-- header start -->
@@ -21,12 +26,7 @@ license: other
 
 These files are GGML format model files for [HuggingFaceH4's Starchat Beta](https://huggingface.co/HuggingFaceH4/starchat-beta).
 
-GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
-* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
-* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
-* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
-* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
-* [ctransformers](https://github.com/marella/ctransformers)
+Please note that these GGMLs are **not compatible with llama.cpp**. Please see below for a list of tools known to work with these model files.
 
 ## Repositories available
 
@@ -35,31 +35,23 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/HuggingFaceH4/starchat-beta)
 
 <!-- compatibility_ggml start -->
-## Compatibility
+## Compatibility
 
-
+These files are **not** compatible with llama.cpp.
 
-
+Currently they can be used with:
+* KoboldCpp, a powerful inference engine based on llama.cpp, with a good UI: [KoboldCpp](https://github.com/LostRuins/koboldcpp)
+* The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers)
+* The GPT4All-UI, which uses ctransformers: [GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
+* [rustformers' llm](https://github.com/rustformers/llm)
+* The example `starcoder` binary provided with [ggml](https://github.com/ggerganov/ggml)
 
-
+As other options become available I will endeavour to add them here (do let me know in the Community tab if I've missed something!)
 
-
+## Tutorial for using GPT4All-UI
 
-
-
-They will NOT be compatible with koboldcpp, text-generation-webui, and other UIs and libraries yet. Support is expected to come over the next few days.
-
-## Explanation of the new k-quant methods
-
-The new methods available are:
-* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
-* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
-* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
-* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
-* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
-* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
-
-Refer to the Provided Files table below to see what files use which methods, and how.
+* [Text tutorial, written by **Lucas3DCG**](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML/discussions/2#6475d914e9b57ce0caa68888)
+* [Video tutorial, by GPT4All-UI's author **ParisNeo**](https://www.youtube.com/watch?v=ds_U0TDzbzI)
 <!-- compatibility_ggml end -->
 
 ## Provided files
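The bpw figures in the removed k-quant block above can be sanity-checked with a little arithmetic. The sketch below works through GGML_TYPE_Q4_K; the per-super-block fp16 scale and min are an assumption about the block layout (the text above does not spell it out), so treat this as illustrative accounting rather than the exact on-disk format.

```python
# Worked check of the "4.5 bpw" figure for GGML_TYPE_Q4_K, following the
# description above: super-blocks of 8 blocks x 32 weights, with scales
# and mins quantized to 6 bits per block. Assumed (not stated above):
# one fp16 scale and one fp16 min per super-block.
weights = 8 * 32            # 256 weights per super-block
quant_bits = 4 * weights    # raw 4-bit quants                  -> 1024 bits
block_meta = 8 * (6 + 6)    # 6-bit scale + 6-bit min per block ->   96 bits
super_meta = 2 * 16         # assumed fp16 scale + fp16 min     ->   32 bits

bpw = (quant_bits + block_meta + super_meta) / weights
print(bpw)  # 4.5, matching the GGML_TYPE_Q4_K entry above
```

The same accounting with 5-bit quants gives (1280 + 96 + 32) / 256 = 5.5 bpw, which is the GGML_TYPE_Q5_K figure quoted above.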
@@ -71,26 +63,6 @@ Refer to the Provided Files table below to see what files use which methods, and
 | starchat-beta.ggmlv3.q5_1.bin | q5_1 | 5 | 14.26 GB | 16.76 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
 | starchat-beta.ggmlv3.q8_0.bin | q8_0 | 8 | 20.11 GB | 22.61 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
 
-
-**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
-
-## How to run in `llama.cpp`
-
-I use the following command line; adjust for your tastes and needs:
-
-```
-./main -t 10 -ngl 32 -m starchat-beta.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
-```
-
-Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
-
-Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
-
-If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
-
-## How to run in `text-generation-webui`
-
-Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
-
 <!-- footer start -->
 ## Discord
 
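To make the compatibility list above concrete, here is a minimal sketch of running one of the provided files with the ctransformers library. Two assumptions to verify, since neither appears in this README: that ctransformers' `"starcoder"` model type covers StarChat Beta (a StarCoder fine-tune), and that the `<|system|>`/`<|user|>`/`<|assistant|>` prompt format from the StarChat Beta model card applies.

```python
# Minimal sketch: run a provided GGML file with ctransformers
# (pip install ctransformers). Assumes the q5_1 file from the Provided
# Files table has already been downloaded to the working directory.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "starchat-beta.ggmlv3.q5_1.bin",  # any file from the table above
    model_type="starcoder",           # assumption: StarChat Beta is StarCoder-based
)

# Assumed StarChat Beta chat format (see its model card).
prompt = "<|system|>\n<|end|>\n<|user|>\nWrite a haiku about llamas<|end|>\n<|assistant|>"
print(llm(prompt, max_new_tokens=128, temperature=0.7, stop=["<|end|>"]))
```

The "includes LangChain support" note in the list above refers to ctransformers' bundled LangChain integration; see the ctransformers README for how to wire that up.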