---
datasets:
- OpenAssistant/oasst1
- shahules786/orca-best
inference: false
language:
- en
license: llama2
model_creator: OpenAssistant
model_link: https://huggingface.co/OpenAssistant/codellama-13b-oasst-sft-v10
model_name: CodeLlama 13B OASST SFT v10
model_type: llama
quantized_by: TheBloke
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# CodeLlama 13B OASST SFT v10 - GGML
- Model creator: [OpenAssistant](https://huggingface.co/OpenAssistant)
- Original model: [CodeLlama 13B OASST SFT v10](https://huggingface.co/OpenAssistant/codellama-13b-oasst-sft-v10)

## Description

This repo contains GGML format model files for [OpenAssistant's CodeLlama 13B OASST SFT v10](https://huggingface.co/OpenAssistant/codellama-13b-oasst-sft-v10).

### Important note regarding GGML files

The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third-party clients and libraries are expected to still support it for a time, but many may also drop support.

### About GGML

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD) and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the ctransformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server (see the Python sketch after this list).
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
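
For a quick Python start with these GGML files, here is a minimal sketch using ctransformers. The chosen file name and `gpu_layers` value are illustrative, and keyword names can vary between ctransformers versions:

```
# Minimal sketch: GGML inference via ctransformers. The model_file and
# gpu_layers values are illustrative; pick any file from the table below.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/CodeLlama-13B-oasst-sft-v10-GGML",
    model_file="codellama-13b-oasst-sft-v10.ggmlv3.Q4_K_M.bin",
    model_type="llama",
    gpu_layers=32,  # set to 0 for CPU-only inference
)

print(llm("def fibonacci(n):", max_new_tokens=128))
```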

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGML)
* [OpenAssistant's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenAssistant/codellama-13b-oasst-sft-v10)

## Prompt template: ChatML

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

```
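
In Python, the template above can be filled with a plain f-string. A tiny sketch (the system message and prompt are placeholders):

```
# Tiny sketch: filling the ChatML template above (placeholder values).
system_message = "You are a helpful coding assistant."
prompt = "Write a Python function that checks whether a number is prime."

formatted = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)
```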

<!-- compatibility_ggml start -->
## Compatibility

These quantised GGML files are compatible with llama.cpp between June 6th, 2023 (commit `2d43387`) and August 21st, 2023.

For support with the latest llama.cpp, please use GGUF files instead.

The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)

As of August 23rd, 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.

## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>

The new methods available are (a worked bits-per-weight example follows this list):
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
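
As a worked example of where these bpw figures come from (our own illustrative arithmetic, not taken from the llama.cpp source), consider GGML_TYPE_Q4_K:

```
# Our own illustrative arithmetic for GGML_TYPE_Q4_K's 4.5 bpw figure.
weights = 8 * 32                 # 8 blocks of 32 weights per super-block
quant_bits = weights * 4         # 4-bit quantised weights           -> 1024 bits
block_bits = 8 * (6 + 6)         # 6-bit scale + 6-bit min per block ->   96 bits
super_bits = 2 * 16              # fp16 super-block scale and min    ->   32 bits
print((quant_bits + block_bits + super_bits) / weights)  # 4.5
```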

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [codellama-13b-oasst-sft-v10.ggmlv3.Q2_K.bin](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGML/blob/main/codellama-13b-oasst-sft-v10.ggmlv3.Q2_K.bin) | Q2_K | 2 | 5.74 GB | 8.24 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [codellama-13b-oasst-sft-v10.ggmlv3.Q3_K_S.bin](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGML/blob/main/codellama-13b-oasst-sft-v10.ggmlv3.Q3_K_S.bin) | Q3_K_S | 3 | 5.87 GB | 8.37 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| [codellama-13b-oasst-sft-v10.ggmlv3.Q3_K_M.bin](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGML/blob/main/codellama-13b-oasst-sft-v10.ggmlv3.Q3_K_M.bin) | Q3_K_M | 3 | 6.53 GB | 9.03 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| [codellama-13b-oasst-sft-v10.ggmlv3.Q3_K_L.bin](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGML/blob/main/codellama-13b-oasst-sft-v10.ggmlv3.Q3_K_L.bin) | Q3_K_L | 3 | 7.14 GB | 9.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| [codellama-13b-oasst-sft-v10.ggmlv3.Q4_0.bin](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGML/blob/main/codellama-13b-oasst-sft-v10.ggmlv3.Q4_0.bin) | Q4_0 | 4 | 7.32 GB | 9.82 GB | Original quant method, 4-bit. |
| [codellama-13b-oasst-sft-v10.ggmlv3.Q4_K_S.bin](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGML/blob/main/codellama-13b-oasst-sft-v10.ggmlv3.Q4_K_S.bin) | Q4_K_S | 4 | 7.56 GB | 10.06 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| [codellama-13b-oasst-sft-v10.ggmlv3.Q4_K_M.bin](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGML/blob/main/codellama-13b-oasst-sft-v10.ggmlv3.Q4_K_M.bin) | Q4_K_M | 4 | 8.06 GB | 10.56 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
| [codellama-13b-oasst-sft-v10.ggmlv3.Q4_1.bin](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGML/blob/main/codellama-13b-oasst-sft-v10.ggmlv3.Q4_1.bin) | Q4_1 | 4 | 8.14 GB | 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. |
| [codellama-13b-oasst-sft-v10.ggmlv3.Q5_0.bin](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGML/blob/main/codellama-13b-oasst-sft-v10.ggmlv3.Q5_0.bin) | Q5_0 | 5 | 8.95 GB | 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [codellama-13b-oasst-sft-v10.ggmlv3.Q5_K_S.bin](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGML/blob/main/codellama-13b-oasst-sft-v10.ggmlv3.Q5_K_S.bin) | Q5_K_S | 5 | 9.15 GB | 11.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| [codellama-13b-oasst-sft-v10.ggmlv3.Q5_K_M.bin](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGML/blob/main/codellama-13b-oasst-sft-v10.ggmlv3.Q5_K_M.bin) | Q5_K_M | 5 | 9.40 GB | 11.90 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
| [codellama-13b-oasst-sft-v10.ggmlv3.Q5_1.bin](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGML/blob/main/codellama-13b-oasst-sft-v10.ggmlv3.Q5_1.bin) | Q5_1 | 5 | 9.76 GB | 12.26 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
| [codellama-13b-oasst-sft-v10.ggmlv3.Q6_K.bin](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGML/blob/main/codellama-13b-oasst-sft-v10.ggmlv3.Q6_K.bin) | Q6_K | 6 | 10.83 GB | 13.33 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization. |
| [codellama-13b-oasst-sft-v10.ggmlv3.Q8_0.bin](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGML/blob/main/codellama-13b-oasst-sft-v10.ggmlv3.Q8_0.bin) | Q8_0 | 8 | 13.83 GB | 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
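
If you only want a single quantisation rather than the whole repo, one convenient option is the `huggingface_hub` library. A minimal sketch (the file name is just an example from the table above):

```
# Sketch: download a single quantised file with huggingface_hub.
# The filename is an example; substitute any file from the table above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/CodeLlama-13B-oasst-sft-v10-GGML",
    filename="codellama-13b-oasst-sft-v10.ggmlv3.Q4_K_M.bin",
)
print(path)
```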

## How to run in `llama.cpp`

Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.

For compatibility with the latest llama.cpp, please use GGUF files instead.

The example prompt below follows this model's ChatML template, described above.

```
./main -t 10 -ngl 32 -m codellama-13b-oasst-sft-v10.ggmlv3.Q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>user\nWrite a story about llamas<|im_end|>\n<|im_start|>assistant\n"
```
Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 2048` to the desired sequence length for this model. For example, use `-c 4096` for a Llama 2 model. For models that use RoPE scaling, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
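
If you would rather drive GGML inference from Python, the same flags map onto [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). A minimal sketch, assuming a pre-GGUF release of llama-cpp-python (our understanding is that later releases load GGUF only, so check the project's changelog):

```
# Sketch: GGML inference with llama-cpp-python. Requires a pre-GGUF release
# (our understanding: 0.1.78 or earlier); parameter values mirror the CLI above.
from llama_cpp import Llama

llm = Llama(
    model_path="codellama-13b-oasst-sft-v10.ggmlv3.Q4_K_M.bin",
    n_ctx=2048,        # like -c 2048
    n_threads=10,      # like -t 10
    n_gpu_layers=32,   # like -ngl 32
)

output = llm(
    "<|im_start|>user\nWrite a story about llamas<|im_end|>\n<|im_start|>assistant\n",
    max_tokens=512,
    temperature=0.7,     # like --temp 0.7
    repeat_penalty=1.1,  # like --repeat_penalty 1.1
)
print(output["choices"][0]["text"])
```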

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Kacper Wikieł, knownsqashed, Leonard Tan, Asp the Wyvern, Daniel P. Andersen, Luke Pendergrass, Stanislav Ovsiannikov, RoA, Dave, Ai Maven, Kalila, Will Dee, Imad Khwaja, Nitin Borwankar, Joseph William Delisle, Tony Hughes, Cory Kujawski, Rishabh Srivastava, Russ Johnson, Stephen Murray, Lone Striker, Johann-Peter Hartmann, Elle, J, Deep Realms, SuperWojo, Raven Klaugh, Sebastain Graf, ReadyPlayerEmma, Alps Aficionado, Mano Prime, Derek Yates, Gabriel Puliatti, Mesiah Bishop, Magnesian, Sean Connelly, biorpg, Iucharbius, Olakabola, Fen Risland, Space Cruiser, theTransient, Illia Dulskyi, Thomas Belote, Spencer Kim, Pieter, John Detwiler, Fred von Graf, Michael Davis, Swaroop Kallakuri, subjectnull, Clay Pascal, Subspace Studios, Chris Smitley, Enrico Ros, usrbinkat, Steven Wood, alfie_i, David Ziegler, Willem Michiel, Matthew Berman, Andrey, Pyrater, Jeffrey Morgan, vamX, LangChain4j, Luke @flexchar, Trenton Dambrowitz, Pierre Kircher, Alex, Sam, James Bentley, Edmond Seymore, Eugene Pentland, Pedro Madruga, Rainer Wilmers, Dan Guido, Nathan LeClaire, Spiking Neurons AB, Talal Aujan, zynix, Artur Olbinski, Michael Levine, 阿明, K, John Villwock, Nikolai Manek, Femi Adebogun, senxiiz, Deo Leter, NimbleBox.ai, Viktor Bowallius, Geoffrey Montalvo, Mandus, Ajan Kanaga, ya boyyy, Jonathan Leane, webtim, Brandon Frisco, danny, Alexandros Triantafyllidis, Gabriel Tamborski, Randy H, terasurfer, Vadim, Junyu Yang, Vitor Caleffi, Chadd, transmissions 11

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: OpenAssistant's CodeLlama 13B OASST SFT v10

# Open-Assistant CodeLlama 13B SFT v10

This model is an Open-Assistant fine-tuning of Meta's CodeLlama 13B LLM.

**Note**: Due to the new RoPE Theta value (1e6 instead of 1e4), for correct results you must load this model with `trust_remote_code=True` or use the latest main branch of Hugging Face Transformers (until version 4.33 is released).
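
For example, a minimal loading sketch (our own illustration; `trust_remote_code=True` can be dropped once Transformers >= 4.33 is available):

```
# Sketch: loading the original fp16 model with Hugging Face Transformers.
# trust_remote_code=True is only needed before Transformers 4.33, per the note above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenAssistant/codellama-13b-oasst-sft-v10"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
```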

## Model Details

- **Finetuned from:** [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) via [epfLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English
- **Weights & Biases training logs:** 6123 steps, BS 64 [run56_oa_llamacode](https://wandb.ai/open-assistant/public-sft/runs/run56_oa_llamacode)
- **Demo:** [Continuations for 250 random prompts (without system message)](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-08-26_OpenAssistant_codellama-13b-oasst-sft-v10_sampling_noprefix2.json)
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)

## Prompting / Prompt Template

Due to public demand (see [survey](https://twitter.com/erhartford/status/1682403597525430272)), we changed the prompt template for this model from custom prompter/assistant tokens to OpenAI's [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) standard prompt format.
We hope that this leads to greater compatibility with chat inference/frontend applications.

Prompt dialogue template:

```
"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
```

The model input can contain multiple conversation turns between user and assistant, e.g.
```
<|im_start|>user
{prompt 1}<|im_end|>
<|im_start|>assistant
{reply 1}<|im_end|>
<|im_start|>user
{prompt 2}<|im_end|>
<|im_start|>assistant
(...)
```

The model was partly trained with orca system messages.
For inference, we recommend using the official [Llama2 system message](https://github.com/facebookresearch/llama/blob/ea9f33d6d3ea8ed7d560d270986407fd6c2e52b7/example_chat_completion.py#L57-L61):
```
<|im_start|>system
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.

If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<|im_end|>
```
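
A small helper (our own sketch, not part of the model release) that assembles a multi-turn ChatML prompt with such a system message:

```
# Our own illustrative helper: assemble a multi-turn ChatML prompt.
def chatml_prompt(system_message, turns):
    """turns: list of (role, content) pairs, role being "user" or "assistant"."""
    parts = [f"<|im_start|>system\n{system_message}<|im_end|>"]
    for role, content in turns:
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # cue the model to answer next
    return "\n".join(parts)

prompt = chatml_prompt(
    "You are a helpful, respectful and honest assistant.",
    [("user", "Write a story about llamas.")],
)
```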

### Credits & Special Thanks

- Thanks to [Meta AI](https://ai.meta.com/) for training and releasing the CodeLlama model.
- Distributed training support was provided by EPFL's [Machine Learning and Optimization Laboratory](https://www.epfl.ch/labs/mlo/) and [Natural Language Processing Lab](https://nlp.epfl.ch/).
- The open-source [epfLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) trainer was used for fine-tuning.
- [rombodawg](https://huggingface.co/rombodawg) curated the [LosslessMegaCodeTrainingV2_1m_Evol_Uncensored](https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored) dataset.
- [ehartford](https://huggingface.co/ehartford) generated and published the [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) dataset.
- [shahules786](https://github.com/shahules786) de-duplicated and filtered the Dolphin and Megacode datasets with a clustering/centroid approach and generated orca-best & bestofmegacode.
- [andreaskoepf](https://github.com/andreaskoepf/) prepared & orchestrated the training.

## Ethical Considerations and Limitations

Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of codellama-13b-oasst-sft-v10 cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of codellama-13b-oasst-sft-v10, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).

## Configuration Details

The "pretokenizer" utility used to tokenize the datamix is part of the Open-Assistant GitHub repository and can be found here: [model/pretokenizer](https://github.com/LAION-AI/Open-Assistant/tree/main/model/pretokenizer).

### Pretokenizer Configuration

```
orca_megacode_oasst_best:
  datasets:
    - orca-chat:
        val_split: 0.01
        max_val_set: 1000
    - bestofmegacode:
        val_split: 0.01
        max_val_set: 1000
    - oasst_export:
        lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
        #hf_dataset_name: OpenAssistant/oasst1
        input_file_path: 2023-08-25_oasst_ready.jsonl.gz
        top_k: 1
        val_split: 0.025
  output_dir: "output/orca_megacode_oasst_best"
  filename_prefix: "orca_megacode_oasst_best"
  min_assistant_tokens: 1
```