---
datasets:
- totally-not-an-llm/EverythingLM-data-V2
- garage-bAInd/Open-Platypus
- Open-Orca/OpenOrca
inference: false
license: llama2
model_creator: Kai Howard
The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.

Please use the GGUF models instead.

### About GGML

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
```
You are a helpful AI assistant.

USER: {prompt}
ASSISTANT:
```
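As a quick sketch (not part of the original card), the template above can be filled programmatically; `build_prompt` and its default system message are illustrative names, with the default taken from the example prompt used elsewhere in this card:

```python
# Illustrative helper (not from the model card): fill the PuddleJumper
# prompt template shown above with a user message.
def build_prompt(user_message: str,
                 system: str = "You are a helpful AI assistant.") -> str:
    # Template shape: system line, blank line, USER turn, open ASSISTANT turn.
    return f"{system}\n\nUSER: {user_message}\nASSISTANT:"

prompt = build_prompt("Write a story about llamas")
print(prompt)
```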
<!-- compatibility_ggml start -->
For compatibility with latest llama.cpp, please use GGUF files instead.

```
./main -t 10 -ngl 32 -m puddlejumper-13b.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful AI assistant.\n\nUSER: Write a story about llamas\nASSISTANT:"
```

Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
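As an aside (a heuristic sketch, not from the card): on SMT/hyper-threaded machines the logical CPU count reported by the OS is typically twice the physical core count, so a rough way to pick a `-t` value is:

```python
import os

# Rough heuristic (assumption): logical CPUs / 2 approximates physical cores
# on SMT systems; fall back to 1 if the count is unavailable.
logical = os.cpu_count() or 1
threads = max(1, logical // 2)
print(f"suggested flag: -t {threads}")
```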
**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser

Thank you to all my generous patrons and donaters!
Merge of EverythingLM-V2-13b QLoRa and OpenOrca-Platypus2-13B.

Quants (Thanks TheBloke):

- https://huggingface.co/TheBloke/PuddleJumper-13B-GPTQ
- https://huggingface.co/TheBloke/PuddleJumper-13B-GGML
- https://huggingface.co/TheBloke/PuddleJumper-13B-GGUF

### Prompt format:

Many options: