TheBloke committed on
Commit
0eee9e9
1 Parent(s): 41c36f9

Upload README.md

Files changed (1)
  1. README.md +122 -9
README.md CHANGED
@@ -5,6 +5,9 @@ license: llama2
5
  model_creator: posicube
6
  model_name: Llama2 Chat AYT 13B
7
  model_type: llama
8
  quantized_by: TheBloke
9
  ---
10
 
@@ -56,6 +59,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
56
  <!-- repositories-available start -->
57
  ## Repositories available
58

59
  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama2-Chat-AYT-13B-GPTQ)
60
  * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama2-Chat-AYT-13B-GGUF)
61
  * [posicube's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/posicube/Llama2-chat-AYT-13B)
@@ -70,15 +74,8 @@ Here is an incomplete list of clients and libraries that are known to support GG
70
  ```
71
 
72
  <!-- prompt-template end -->
73
- <!-- licensing start -->
74
- ## Licensing
75
-
76
- The creator of the source model has listed its license as `llama2`, and this quantization has therefore used that same license.
77
 
78
- As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
79
 
80
- In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [posicube's Llama2 Chat AYT 13B](https://huggingface.co/posicube/Llama2-chat-AYT-13B).
81
- <!-- licensing end -->
82
  <!-- compatibility_gguf start -->
83
  ## Compatibility
84
 
@@ -125,6 +122,63 @@ Refer to the Provided Files table below to see what files use which methods, and
125
 
126
  <!-- README_GGUF.md-provided-files end -->
127

128
  <!-- README_GGUF.md-how-to-run start -->
129
  ## Example `llama.cpp` command
130
 
@@ -210,7 +264,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
210
 
211
  **Special thanks to**: Aemon Algiz.
212
 
213
- **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
214
 
215
 
216
  Thank you to all my generous patrons and donaters!
@@ -222,6 +276,65 @@ And thank you again to a16z for their generous grant.
222
  <!-- original-model-card start -->
223
  # Original model card: posicube's Llama2 Chat AYT 13B
224
 
225
- We will update this soon.
226
 
227
  <!-- original-model-card end -->
 
5
  model_creator: posicube
6
  model_name: Llama2 Chat AYT 13B
7
  model_type: llama
8
+ prompt_template: '{prompt}
9
+
10
+ '
11
  quantized_by: TheBloke
12
  ---
13
 
 
59
  <!-- repositories-available start -->
60
  ## Repositories available
61
 
62
+ * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama2-Chat-AYT-13B-AWQ)
63
  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama2-Chat-AYT-13B-GPTQ)
64
  * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama2-Chat-AYT-13B-GGUF)
65
  * [posicube's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/posicube/Llama2-chat-AYT-13B)
 
74
  ```
75
 
76
  <!-- prompt-template end -->
77

78

79
  <!-- compatibility_gguf start -->
80
  ## Compatibility
81
 
 
122
 
123
  <!-- README_GGUF.md-provided-files end -->
124
 
125
+ <!-- README_GGUF.md-how-to-download start -->
126
+ ## How to download GGUF files
127
+
128
+ **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
129
+
130
+ The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
131
+ - LM Studio
132
+ - LoLLMS Web UI
133
+ - Faraday.dev
134
+
135
+ ### In `text-generation-webui`
136
+
137
+ Under Download Model, you can enter the model repo: TheBloke/Llama2-Chat-AYT-13B-GGUF and below it, a specific filename to download, such as: llama2-chat-ayt-13b.q4_K_M.gguf.
138
+
139
+ Then click Download.
140
+
141
+ ### On the command line, including multiple files at once
142
+
143
+ I recommend using the `huggingface-hub` Python library:
144
+
145
+ ```shell
146
+ pip3 install "huggingface-hub>=0.17.1"
147
+ ```
148
+
149
+ Then you can download any individual model file to the current directory, at high speed, with a command like this:
150
+
151
+ ```shell
152
+ huggingface-cli download TheBloke/Llama2-Chat-AYT-13B-GGUF llama2-chat-ayt-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
153
+ ```
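+
+ If you prefer to stay in Python, the same `huggingface-hub` library can do this directly. A minimal sketch equivalent to the command above, using `hf_hub_download` with the repo and filename from this README:
+
+ ```python
+ # Minimal Python equivalent of the huggingface-cli command above.
+ # Downloads a single GGUF file to the current directory and prints
+ # its local path.
+ from huggingface_hub import hf_hub_download
+
+ model_path = hf_hub_download(
+     repo_id="TheBloke/Llama2-Chat-AYT-13B-GGUF",
+     filename="llama2-chat-ayt-13b.q4_K_M.gguf",
+     local_dir=".",
+     local_dir_use_symlinks=False,
+ )
+ print(model_path)
+ ```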
154
+
155
+ <details>
156
+ <summary>More advanced huggingface-cli download usage</summary>
157
+
158
+ You can also download multiple files at once with a pattern:
159
+
160
+ ```shell
161
+ huggingface-cli download TheBloke/Llama2-Chat-AYT-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
162
+ ```
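+
+ The Python equivalent of a pattern download is `snapshot_download` with `allow_patterns`; a sketch using the same Q4_K pattern as above:
+
+ ```python
+ # Sketch: fetch every file in the repo matching the Q4_K pattern,
+ # mirroring the --include option of the CLI command above.
+ from huggingface_hub import snapshot_download
+
+ snapshot_download(
+     repo_id="TheBloke/Llama2-Chat-AYT-13B-GGUF",
+     allow_patterns=["*Q4_K*gguf"],
+     local_dir=".",
+     local_dir_use_symlinks=False,
+ )
+ ```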
163
+
164
+ For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
165
+
166
+ To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
167
+
168
+ ```shell
169
+ pip3 install hf_transfer
170
+ ```
171
+
172
+ And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
173
+
174
+ ```shell
175
+ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama2-Chat-AYT-13B-GGUF llama2-chat-ayt-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
176
+ ```
177
+
178
+ Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
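+
+ From Python, the same switch can be set with `os.environ`. Note that `huggingface_hub` reads the variable when it is imported, so set it before the import (a sketch, assuming `hf_transfer` is installed):
+
+ ```python
+ import os
+
+ # Must be set before huggingface_hub is imported; the library reads
+ # this variable at import time.
+ os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
+
+ from huggingface_hub import hf_hub_download
+
+ hf_hub_download(
+     repo_id="TheBloke/Llama2-Chat-AYT-13B-GGUF",
+     filename="llama2-chat-ayt-13b.q4_K_M.gguf",
+     local_dir=".",
+ )
+ ```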
179
+ </details>
180
+ <!-- README_GGUF.md-how-to-download end -->
181
+
182
  <!-- README_GGUF.md-how-to-run start -->
183
  ## Example `llama.cpp` command
184
 
 
264
 
265
  **Special thanks to**: Aemon Algiz.
266
 
267
+ **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
268
 
269
 
270
  Thank you to all my generous patrons and donaters!
 
276
  <!-- original-model-card start -->
277
  # Original model card: posicube's Llama2 Chat AYT 13B
278
 
279
+
280
+ This is a model derived from Llama-2-13b-chat-hf. We hypothesize that if we can find a method to ensemble the top-ranking models on each benchmark effectively, the resulting model's performance will be maximized as well. Following this intuition, we ensembled the top models on each benchmark (ARC, MMLU, and TruthfulQA) to create our model.
281
+
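+ The card does not state which ensembling method was used. Purely as an illustration of one common approach for models that share an architecture, here is a hypothetical sketch of uniform weight averaging; the model IDs are placeholders and this is not claimed to be the authors' actual method:
+
+ ```python
+ # Hypothetical sketch: uniform weight-space averaging of two
+ # fine-tuned checkpoints that share the Llama-2-13b architecture.
+ # NOT the authors' confirmed method; model IDs are placeholders.
+ import torch
+ from transformers import AutoModelForCausalLM
+
+ model_a = AutoModelForCausalLM.from_pretrained(
+     "org/llama2-13b-finetune-a", torch_dtype=torch.float16)
+ model_b = AutoModelForCausalLM.from_pretrained(
+     "org/llama2-13b-finetune-b", torch_dtype=torch.float16)
+
+ merged = model_a.state_dict()
+ for name, param_b in model_b.state_dict().items():
+     merged[name] = (merged[name] + param_b) / 2  # uniform average
+
+ model_a.load_state_dict(merged)
+ model_a.save_pretrained("llama2-13b-merged")
+ ```
+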
282
+ # Model Details
283
+ - **Developed by**: Posicube Inc.
284
+ - **Backbone Model**: LLaMA-2-13b-chat
285
+ - **Library**: HuggingFace Transformers
286
+ - **Used Dataset Details**
287
+   Orca-style datasets, Alpaca-style datasets
288
+
289
+
290
+ # Evaluation
291
+ We achieved the top rank among 13B models on the leaderboard as of September 13th, 2023.
292
+
293
+ | Metric |Scores on Leaderboard| Our results |
294
+ |---------------------|---------------------|-------------|
295
+ | ARC (25-shot) | 63.31 | 63.57 |
296
+ | HellaSwag (10-shot) | 83.53 | 83.77 |
297
+ | MMLU (5-shot) | 59.67 | 59.69 |
298
+ | TruthfulQA (0-shot) | 55.8 | 55.48 |
299
+ | Avg. | 65.58 | 65.63 |
300
+
301
+ # Limitations & Biases:
302
+ Llama 2 and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any application of a Llama 2 variant, developers should perform safety testing and tuning tailored to their specific application of the model.
303
+
304
+ Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
305
+
306
+ # License Disclaimer:
307
+ This model is bound by the license and usage restrictions of the original Llama-2 model, and it comes with no warranty or guarantees of any kind.
308
+
309
+ # Contact Us
310
+ [Posicube](https://www.posicube.com/)
311
+
312
+ # Citation:
313
+ Please kindly cite using the following BibTeX:
314
+
315
+ ```bibtex
316
+ @misc{mukherjee2023orca,
317
+ title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
318
+ author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
319
+ year={2023},
320
+ eprint={2306.02707},
321
+ archivePrefix={arXiv},
322
+ primaryClass={cs.CL}
323
+ }
324
+ ```
325
+
326
+ ```bibtex
327
+ @software{touvron2023llama2,
328
+ title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
329
+ author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and
330
+ Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and
331
+ Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and
332
+ Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and
333
+ Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and
334
+ Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and
335
+ Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
336
+ year={2023}
337
+ }
338
+ ```
339
 
340
  <!-- original-model-card end -->