Text Generation
Transformers
English
llama
TheBloke committed on
Commit b3856c7
1 Parent(s): fd6e0e8

Upload README.md

Files changed (1)
  1. README.md +31 -14
README.md CHANGED
@@ -39,6 +39,13 @@ quantized_by: TheBloke

This repo contains GGML format model files for [Open-Orca's LlongOrca 13B 16K](https://huggingface.co/Open-Orca/LlongOrca-13B-16k).

+ ### Important note regarding GGML files.
+
+ The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third-party clients and libraries are expected to still support it for a time, but many may also drop support.
+
+ Please use the GGUF models instead.
+ ### About GGML
+
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for storytelling.
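Note: since this repo's `.bin` files are legacy GGML while newer repos ship GGUF, it can be handy to confirm which format a downloaded file actually is. A minimal Python sketch — the `GGUF` magic check follows the GGUF spec, while treating any other header as legacy GGML is a simplifying assumption, not a full parser:

```python
# Minimal sketch: tell GGUF files apart from legacy GGML ones by magic bytes.
# GGUF files begin with the 4-byte ASCII magic b"GGUF"; treating any other
# header as legacy GGML is a simplifying assumption, not a full format parser.
def looks_like_gguf(path: str) -> bool:
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# The files in this repo are GGML, so this should print False.
print(looks_like_gguf("llongorca-13b-16k.ggmlv3.q4_K_M.bin"))
```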
@@ -50,7 +57,8 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GPTQ)
- * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML)
+ * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGUF)
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML)
* [Open-Orca's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Open-Orca/LlongOrca-13B-16k)

## Prompt template: ChatML
@@ -61,14 +69,19 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
+
```

<!-- compatibility_ggml start -->
## Compatibility

- These quantised GGML files are compatible with llama.cpp as of June 6th, commit `2d43387`.
+ These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023.
+
+ For support with the latest llama.cpp, please use GGUF files instead.

- They should also be compatible with all UIs, libraries and utilities which use GGML.
+ The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)
+
+ As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.

## Explanation of the new k-quant methods
<details>
@@ -91,17 +104,17 @@ Refer to the Provided Files table below to see what files use which methods, and
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llongorca-13b-16k.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q2_K.bin) | q2_K | 2 | 5.74 GB | 8.24 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
- | [llongorca-13b-16k.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 7.14 GB | 9.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
- | [llongorca-13b-16k.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 6.53 GB | 9.03 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llongorca-13b-16k.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 5.87 GB | 8.37 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
+ | [llongorca-13b-16k.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 6.53 GB | 9.03 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+ | [llongorca-13b-16k.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 7.14 GB | 9.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llongorca-13b-16k.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q4_0.bin) | q4_0 | 4 | 7.32 GB | 9.82 GB | Original quant method, 4-bit. |
- | [llongorca-13b-16k.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q4_1.bin) | q4_1 | 4 | 8.14 GB | 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. |
- | [llongorca-13b-16k.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 8.06 GB | 10.56 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [llongorca-13b-16k.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 7.56 GB | 10.06 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
+ | [llongorca-13b-16k.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 8.06 GB | 10.56 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
+ | [llongorca-13b-16k.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q4_1.bin) | q4_1 | 4 | 8.14 GB | 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. |
| [llongorca-13b-16k.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q5_0.bin) | q5_0 | 5 | 8.95 GB | 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
- | [llongorca-13b-16k.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q5_1.bin) | q5_1 | 5 | 9.76 GB | 12.26 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
- | [llongorca-13b-16k.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 9.40 GB | 11.90 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [llongorca-13b-16k.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 9.14 GB | 11.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
+ | [llongorca-13b-16k.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 9.40 GB | 11.90 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
+ | [llongorca-13b-16k.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q5_1.bin) | q5_1 | 5 | 9.76 GB | 12.26 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
| [llongorca-13b-16k.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q6_K.bin) | q6_K | 6 | 10.83 GB | 13.33 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| [llongorca-13b-16k.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/LlongOrca-13B-16K-GGML/blob/main/llongorca-13b-16k.ggmlv3.q8_0.bin) | q8_0 | 8 | 13.83 GB | 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

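Note: the "Max RAM required" figures above are simply file size plus roughly 2.5 GB of inference overhead, assuming no GPU offloading; offloading layers with `-ngl` reduces RAM usage and uses VRAM instead. A purely illustrative check of that pattern (the 2.5 GB constant is read off this table, not an official llama.cpp figure):

```python
# Illustrative only: the table's "Max RAM required" figures track
# file size + ~2.5 GB of overhead (a constant read off the table itself,
# not an official llama.cpp figure).
def max_ram_estimate_gb(file_size_gb: float, overhead_gb: float = 2.5) -> float:
    return round(file_size_gb + overhead_gb, 2)

assert max_ram_estimate_gb(5.74) == 8.24    # q2_K row
assert max_ram_estimate_gb(13.83) == 16.33  # q8_0 row
```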
@@ -109,10 +122,12 @@ Refer to the Provided Files table below to see what files use which methods, and

## How to run in `llama.cpp`

- I use the following command line; adjust for your tastes and needs:
+ Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
+
+ For compatibility with the latest llama.cpp, please use GGUF files instead.

```
- ./main -t 10 -ngl 32 -m llongorca-13b-16k.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
+ ./main -t 10 -ngl 32 -m llongorca-13b-16k.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
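You can also drive the GGML file from Python with the `llama-cpp-python` bindings. The sketch below is illustrative rather than part of the original card: it assumes an older `llama-cpp-python` release that still reads GGML (releases from roughly 0.1.79 onwards expect GGUF), and its parameters mirror the `./main` flags above:

```python
# A sketch, not from the original model card: run the GGML file with
# llama-cpp-python. Assumes an older release that still reads GGML
# (e.g. pip install llama-cpp-python==0.1.78); newer releases expect GGUF.
from llama_cpp import Llama

llm = Llama(
    model_path="llongorca-13b-16k.ggmlv3.q4_K_M.bin",
    n_ctx=2048,       # mirrors -c 2048 above
    n_threads=10,     # mirrors -t 10; set to your physical core count
    n_gpu_layers=32,  # mirrors -ngl 32; requires a GPU-enabled build, 0 for CPU only
)

# ChatML prompt, as in the Prompt template section above.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a story about llamas<|im_end|>\n"
    "<|im_start|>assistant\n"
)

output = llm(
    prompt,
    max_tokens=512,
    temperature=0.7,      # mirrors --temp 0.7
    repeat_penalty=1.1,   # mirrors --repeat_penalty 1.1
    stop=["<|im_end|>"],  # stop at the end of the assistant turn
)
print(output["choices"][0]["text"])
```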
@@ -136,10 +151,12 @@ For further support, and discussions on these models and AI in general, join us

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

- ## Thanks, and how to contribute.
+ ## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

+ Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
+
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
@@ -151,7 +168,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req

**Special thanks to**: Aemon Algiz.

- **Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
+ **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser


Thank you to all my generous patrons and donaters!
@@ -257,7 +274,7 @@ Commodity cost was ~$300.
# Citation

```bibtex
- @software{lian2023llongorca13b,
+ @software{dale2023llongorca13b,
  title = {LlongOrca13B: Llama2-13B Model Instruct-tuned for Long Context on Filtered OpenOrcaV1 GPT-4 Dataset},
  author = {Alpin Dale and Wing Lian and Bleys Goodson and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
  year = {2023},