TheBloke committed
Commit 18c8848
1 parent: 8a9be43

Upload README.md
Files changed (1)
  1. README.md +71 -8
README.md CHANGED
@@ -9,6 +9,18 @@ model_creator: royallab
  model_name: Pygmalion 2 13B SuperCOT2
  model_type: llama
  pipeline_tag: text-generation
  quantized_by: TheBloke
  tags:
  - llama
@@ -63,6 +75,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
  <!-- repositories-available start -->
  ## Repositories available
 
  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT2-GPTQ)
  * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT2-GGUF)
  * [royallab's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/royallab/Pygmalion-2-13b-SuperCOT2)
@@ -82,15 +95,8 @@ Below is an instruction that describes a task. Write a response that appropriate
  ```
 
  <!-- prompt-template end -->
- <!-- licensing start -->
- ## Licensing
-
- The creator of the source model has listed its license as `llama2`, and this quantization has therefore used that same license.
 
- As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
 
- In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [royallab's Pygmalion 2 13B SuperCOT2](https://huggingface.co/royallab/Pygmalion-2-13b-SuperCOT2).
- <!-- licensing end -->
 
  <!-- compatibility_gguf start -->
  ## Compatibility
 
@@ -137,6 +143,63 @@ Refer to the Provided Files table below to see what files use which methods, and
 
  <!-- README_GGUF.md-provided-files end -->
 
  <!-- README_GGUF.md-how-to-run start -->
  ## Example `llama.cpp` command
 
@@ -222,7 +285,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 
  **Special thanks to**: Aemon Algiz.
 
- **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
 
 
  Thank you to all my generous patrons and donaters!
 
  model_name: Pygmalion 2 13B SuperCOT2
  model_type: llama
  pipeline_tag: text-generation
+ prompt_template: 'Below is an instruction that describes a task. Write a response
+ that appropriately completes the request.
+
+
+ ### Instruction:
+
+ {prompt}
+
+
+ ### Response:
+
+ '
  quantized_by: TheBloke
  tags:
  - llama
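
The `prompt_template` added above is the Alpaca-style instruction format this model expects. As a minimal sketch (not part of the commit), here is how the rendered template might be filled in Python; `user_question` and its value are purely illustrative:

```python
# Minimal sketch: fill the Alpaca-style prompt template from the YAML metadata above.
# The template text is the rendered form of the prompt_template in the diff;
# "user_question" is an illustrative value, not something defined by the README.
template = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

user_question = "Summarise the plot of Hamlet in two sentences."
full_prompt = template.format(prompt=user_question)
print(full_prompt)
```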
 
  <!-- repositories-available start -->
  ## Repositories available
 
+ * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT2-AWQ)
  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT2-GPTQ)
  * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT2-GGUF)
  * [royallab's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/royallab/Pygmalion-2-13b-SuperCOT2)
 
  ```
 
  <!-- prompt-template end -->
 
 
  <!-- compatibility_gguf start -->
  ## Compatibility
 
 
  <!-- README_GGUF.md-provided-files end -->
 
+ <!-- README_GGUF.md-how-to-download start -->
+ ## How to download GGUF files
+
+ **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
+
+ The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
+ - LM Studio
+ - LoLLMS Web UI
+ - Faraday.dev
+
+ ### In `text-generation-webui`
+
+ Under Download Model, you can enter the model repo: TheBloke/Pygmalion-2-13B-SuperCOT2-GGUF and below it, a specific filename to download, such as: pygmalion-2-13b-supercot2.q4_K_M.gguf.
+
+ Then click Download.
+
+ ### On the command line, including multiple files at once
+
+ I recommend using the `huggingface-hub` Python library:
+
+ ```shell
+ pip3 install 'huggingface-hub>=0.17.1'
+ ```
+
+ Then you can download any individual model file to the current directory, at high speed, with a command like this:
+
+ ```shell
+ huggingface-cli download TheBloke/Pygmalion-2-13B-SuperCOT2-GGUF pygmalion-2-13b-supercot2.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ ```
+
+ <details>
+ <summary>More advanced huggingface-cli download usage</summary>
+
+ You can also download multiple files at once with a pattern:
+
+ ```shell
+ huggingface-cli download TheBloke/Pygmalion-2-13B-SuperCOT2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
+
+ For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
+
+ To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
+
+ ```shell
+ pip3 install hf_transfer
+ ```
+
+ And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
+
+ ```shell
+ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Pygmalion-2-13B-SuperCOT2-GGUF pygmalion-2-13b-supercot2.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ ```
+
+ Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
+ </details>
+ <!-- README_GGUF.md-how-to-download end -->
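
If you prefer to script the download rather than use the CLI shown above, the same `huggingface_hub` library exposes a Python API. A minimal sketch, assuming the repo and filename from the command above (adjust the filename to the quantisation you actually want):

```python
# Minimal sketch: download a single GGUF file with the huggingface_hub Python API.
# Repo ID and filename are copied from the huggingface-cli example above;
# the keyword arguments mirror the --local-dir / --local-dir-use-symlinks flags.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="TheBloke/Pygmalion-2-13B-SuperCOT2-GGUF",
    filename="pygmalion-2-13b-supercot2.q4_K_M.gguf",
    local_dir=".",                 # save into the current directory
    local_dir_use_symlinks=False,  # copy the file instead of symlinking the HF cache
)
print(local_path)
```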
+
  <!-- README_GGUF.md-how-to-run start -->
  ## Example `llama.cpp` command
 
 
 
  **Special thanks to**: Aemon Algiz.
 
+ **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
 
 
  Thank you to all my generous patrons and donaters!