TheBloke committed on
Commit
6e5f539
1 Parent(s): 99046b4

Initial GGML model commit. Requires llama.cpp from May 12th onwards.

README.md ADDED
@@ -0,0 +1,71 @@
+ ---
+ language:
+ - en
+ tags:
+ - causal-lm
+ - llama
+ inference: false
+ ---
+ # Wizard-Vicuna-13B-GGML
+
+ These are GGML format quantised models of [Eric Hartford's 'uncensored' training of Wizard-Vicuna 13B](https://huggingface.co/ehartford/Wizard-Vicuna-13B-Uncensored).
+
+ They are the result of quantising to 4-bit, 5-bit and 8-bit GGML for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
+
+ ## Repositories available
+
+ * [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GPTQ).
+ * [4bit, 5bit and 8bit GGML models for CPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GGML).
+ * [float16 HF format model for GPU inference and further conversions](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-HF).
+
+ ## REQUIRES LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!
+
+ llama.cpp recently made a breaking change to its quantisation methods.
+
+ I have re-quantised the GGML files in this repo. Therefore you will require llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.
+
+ The previous files, which will still work in older versions of llama.cpp, can be found in branch `previous_llama`.
+
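+ If your llama.cpp build pre-dates that commit, a minimal rebuild looks something like this (a sketch, assuming a Unix-like system with `git` and `make` available):
+
+ ```
+ git clone https://github.com/ggerganov/llama.cpp
+ cd llama.cpp
+ # commit b9fd7ee (May 12th 2023) or anything later supports the new quant formats
+ git checkout b9fd7ee
+ make
+ ```
+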
+ ## Provided files
+
+ | Name | Quant method | Bits | Size | RAM required | Use case |
+ | ---- | ---- | ---- | ---- | ---- | ----- |
+ | `Wizard-Vicuna-13B-Uncensored.ggml.q4_0.bin` | q4_0 | 4bit | 8.14GB | 10.5GB | 4-bit. |
+ | `Wizard-Vicuna-13B-Uncensored.ggml.q5_0.bin` | q5_0 | 5bit | 8.95GB | 11.0GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
+ | `Wizard-Vicuna-13B-Uncensored.ggml.q5_1.bin` | q5_1 | 5bit | 9.76GB | 12.25GB | 5-bit. Even higher accuracy, and higher resource usage and slower inference. |
+ | `Wizard-Vicuna-13B-Uncensored.ggml.q8_0.bin` | q8_0 | 8bit | 15GB | 17GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
+
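+ To fetch a single file without cloning the whole repo, one option is a direct download (the `resolve/main` URL pattern below is an assumption based on the standard Hugging Face file layout, not something stated in this card):
+
+ ```
+ # download only the q5_0 file (~8.95GB); swap the filename for other quant types
+ wget https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GGML/resolve/main/Wizard-Vicuna-13B-Uncensored.ggml.q5_0.bin
+ # optional: compare against the SHA-256 recorded in the repo's Git LFS pointer
+ sha256sum Wizard-Vicuna-13B-Uncensored.ggml.q5_0.bin
+ ```
+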
+ ## How to run in `llama.cpp`
+
+ I use the following command line; adjust for your tastes and needs:
+
+ ```
+ ./main -t 8 -m Wizard-Vicuna-13B-Uncensored.ggml.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
+ ```
+
+ Change `-t 8` to the number of physical CPU cores you have.
+
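+ On Linux, one way to check the physical core count (a sketch; `lscpu` output labels can vary slightly between versions) is:
+
+ ```
+ # physical cores = "Core(s) per socket" x "Socket(s)"; hyperthreads don't count
+ lscpu | grep -E '^(Core\(s\) per socket|Socket\(s\)):'
+ ```
+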
+ ## How to run in `text-generation-webui`
+
+ GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.
+
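+ For instance, with a default checkout the file would simply sit in the models directory (the exact path is an assumption; check the docs linked below for your setup):
+
+ ```
+ text-generation-webui/models/Wizard-Vicuna-13B-Uncensored.ggml.q5_0.bin
+ ```
+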
+ Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
+
+ Note: at the time of writing, text-generation-webui may not yet support the new May 12th llama.cpp quantisation methods.
+
+ **Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5), which may be useful for getting text-generation-webui working with the new llama.cpp quant methods sooner.
+
+ # Original model card
+
+ This is [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b) trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
+
+ Shout out to the open source AI/ML community, and everyone who helped me out.
+
+ Note:
+
+ An uncensored model has no guardrails.
+
+ You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
+
+ Publishing anything this model generates is the same as publishing it yourself.
+
+ You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
Wizard-Vicuna-13B-Uncensored.ggml.q4_0.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:08603b3c40a0c6dc29d1317e1a4873dba95c148641c45c014f3830a8f50a7604
+ size 8136770688
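These are Git LFS pointer files: each `oid` is the SHA-256 of the corresponding model file, so a downloaded file can be checked against it, e.g.:

```
# the printed hash should match the oid sha256 value in the pointer above
sha256sum Wizard-Vicuna-13B-Uncensored.ggml.q4_0.bin
```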
Wizard-Vicuna-13B-Uncensored.ggml.q5_0.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd8421aa5d0f1215581792d2b9a3355ec9076987a988878ed20df1cfed810c72
+ size 8950236288
Wizard-Vicuna-13B-Uncensored.ggml.q5_1.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4f6e9ef7dcea3bd4255160b6b0d113c7610758ee8da7f7749f08ac43721a793d
+ size 9763701888
Wizard-Vicuna-13B-Uncensored.ggml.q8_0.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:85eb70c14d6ea58287307929dea4d80c06582d80ef02a89f1452e7d8b5254591
+ size 14644495488