Initial GGCC model commit
README.md (changed)
This commit replaces the previous GGML-era model card. Dropped from the old card: the list of clients that supported the earlier Falcon GGML files (LoLLMS Web UI, the ctransformers Python library with LangChain support, and the cmp-nct/ggllm.cpp fork that introduced Falcon GGML support), the `wizard-falcon-7b.ggmlv3.fp16.bin` row in the provided-files table (fp16, 14.44 GB, 16.94 GB max RAM, included for further conversions and experimentation, not recommended for normal use), and the note that the new k-quant formats are not yet possible with Falcon 7B. The updated GGCC-format card follows.

</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>

# Eric Hartford's WizardLM Uncensored Falcon 7B GGML

These files are GGML format model files for [Eric Hartford's WizardLM Uncensored Falcon 7B](https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-7b).

These files will **not** work in llama.cpp, text-generation-webui or KoboldCpp.

GGCC is a new format created in a new fork of llama.cpp that introduced this new Falcon GGML-based support: [cmp-nct/ggllm.cpp](https://github.com/cmp-nct/ggllm.cpp).

Currently these files will also not work with code that previously supported Falcon, such as LoLLMS Web UI and ctransformers, but support should be added soon.

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-Uncensored-Falcon-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardLM-Uncensored-Falcon-7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-7b)

## Prompt template: WizardLM

```
prompt
### Response:

```
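
As a concrete illustration (this example prompt is not from the original card), a request filled into the template above looks like this when sent to the model:

```
Write a story about llamas
### Response:

```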

<!-- compatibility_ggml start -->
## Compatibility

To build cmp-nct's fork of llama.cpp with Falcon support plus CUDA acceleration, please try the following steps:

```
git clone https://github.com/cmp-nct/ggllm.cpp
```
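
Only the `git clone` step is shown above. As a rough sketch, assuming ggllm.cpp follows the usual llama.cpp CMake layout (the CUDA flag name here is an assumption, so check the ggllm.cpp README for the real options), the build might continue like this; developer cmp-nct also has notes on compiling under Windows with Visual Studio:

```
# Assumed continuation of the build: a standard out-of-tree CMake build with CUDA enabled.
# The -DGGML_CUBLAS flag name is a guess; consult https://github.com/cmp-nct/ggllm.cpp for the actual options.
cd ggllm.cpp
mkdir build && cd build
cmake -DGGML_CUBLAS=1 ..
cmake --build . --config Release
```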

Once compiled you can then use `bin/falcon_main` just like you would use llama.cpp. For example:
```
bin/falcon_main -t 8 -ngl 100 -b 1 -m wizardlm-7b-uncensored.ggccv1.q4_0.bin -enc -p "write a story about llamas"
```

Parameter `-enc` should automatically use the right prompt template for the model, so you can just enter your desired prompt.

You can specify `-ngl 100` regardless of your VRAM, as it will automatically detect how much VRAM is available to be used.

Adjust `-t 8` (the number of CPU cores to use) according to what performs best on your system. Do not exceed the number of physical CPU cores you have.
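
If you are unsure how many physical cores you have, a quick check on Linux (a generic system command, not part of the original instructions) is:

```
# Physical cores = "Core(s) per socket" x "Socket(s)"; hyperthreads do not count here
lscpu | grep -E '^(Socket\(s\)|Core\(s\) per socket)'
```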

`-b 1` reduces batch size to 1. This slightly lowers prompt evaluation time, but frees up VRAM to load more of the model on to your GPU. If you find prompt evaluation too slow and have enough spare VRAM, you can remove this parameter.
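
Putting those parameter notes together, a hypothetical re-tuned invocation (not taken from the original card) that drops `-b 1` and uses six threads would be:

```
# Illustrative only: default batch size (no -b 1), 6 CPU threads, full GPU offload via -ngl 100
bin/falcon_main -t 6 -ngl 100 -m wizardlm-7b-uncensored.ggccv1.q4_0.bin -enc -p "write a story about llamas"
```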

Please see https://github.com/cmp-nct/ggllm.cpp for further details and instructions.

<!-- compatibility_ggml end -->

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| wizardlm-7b-uncensored.ggccv1.q4_0.bin | q4_0 | 4 | 4.06 GB | 6.56 GB | Original quant method, 4-bit. |
| wizardlm-7b-uncensored.ggccv1.q4_1.bin | q4_1 | 4 | 4.51 GB | 7.01 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| wizardlm-7b-uncensored.ggccv1.q5_0.bin | q5_0 | 5 | 4.96 GB | 7.46 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| wizardlm-7b-uncensored.ggccv1.q5_1.bin | q5_1 | 5 | 5.42 GB | 7.92 GB | Original quant method, 5-bit. Even higher accuracy, higher resource usage and slower inference. |
| wizardlm-7b-uncensored.ggccv1.q8_0.bin | q8_0 | 8 | 7.67 GB | 10.17 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
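
To fetch one of the files in the table, a direct download also works; this example URL is assembled from the repository name and filename above using the standard Hugging Face `resolve/main` path, so treat it as illustrative:

```
# Hypothetical direct download of the 4-bit file listed above
wget https://huggingface.co/TheBloke/WizardLM-Uncensored-Falcon-7B-GGML/resolve/main/wizardlm-7b-uncensored.ggccv1.q4_0.bin
```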
<!-- footer start -->

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.

**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius, Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer, Pieter, zynix, Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex, SuperWojo, Ghost, Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.

Thank you to all my generous patrons and donaters!