---
inference: false
license: llama2
model_creator: Meta
model_link: https://ai.meta.com/resources/models-and-libraries/llama-downloads
model_name: CodeLlama 7B Python
model_type: llama
quantized_by: TheBloke
tags:
- llama-2
- codellama
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# CodeLlama 7B Python - GGML
- Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [CodeLlama 7B Python](https://ai.meta.com/resources/models-and-libraries/llama-downloads)

## Description

This repo contains GGML format model files for [Meta's CodeLlama 7B Python](https://ai.meta.com/resources/models-and-libraries/llama-downloads).

### Important note regarding GGML files

The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third-party clients and libraries are expected to still support GGML for a time, but many may eventually drop support as well.

### About GGML

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD) and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the ctransformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. A loading sketch is shown after this list.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.

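As a quick illustration of the Python route, here is a minimal sketch of loading one of these GGML files with ctransformers. The model file name is one of the files listed under "Provided files" below; the `gpu_layers` value is just an example, not a recommended setting:

```python
# pip install ctransformers  (or ctransformers[cuda] for GPU support)
from ctransformers import AutoModelForCausalLM

# Example file and settings only; any of the .bin files listed under
# "Provided files" below should load the same way.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/CodeLlama-7B-Python-GGML",
    model_file="codellama-7b-python.ggmlv3.Q4_K_M.bin",
    model_type="llama",
    gpu_layers=32,  # set to 0 for CPU-only inference
)

print(llm("def fibonacci(n):", max_new_tokens=128, temperature=0.7))
```
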
## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/CodeLlama-7B-Python-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CodeLlama-7B-Python-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/CodeLlama-7B-Python-GGML)
* [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/CodeLlama-7B-Python-fp16)

## Prompt template: TBC

```
Info on prompt template will be added shortly.
```

<!-- compatibility_ggml start -->
## Compatibility

These quantised GGML files are compatible with llama.cpp from June 6th 2023 (commit `2d43387`) up to August 21st 2023.

For support with the latest llama.cpp, please use GGUF files instead.

The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)

As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.

## Explanation of the new k-quant methods
<details>
  <summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference from the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.

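As a worked check on where these fractional bpw figures come from (assuming the standard 256-weight super-block layout): a GGML_TYPE_Q4_K super-block packs 8 blocks x 32 weights = 256 weights at 4 bits each (1024 bits), plus 16 six-bit block scales/mins (96 bits) and two fp16 super-block scale factors (32 bits), giving (1024 + 96 + 32) / 256 = 4.5 bpw.
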
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [codellama-7b-python.ggmlv3.Q2_K.bin](https://huggingface.co/TheBloke/CodeLlama-7B-Python-GGML/blob/main/codellama-7b-python.ggmlv3.Q2_K.bin) | Q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [codellama-7b-python.ggmlv3.Q3_K_S.bin](https://huggingface.co/TheBloke/CodeLlama-7B-Python-GGML/blob/main/codellama-7b-python.ggmlv3.Q3_K_S.bin) | Q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| [codellama-7b-python.ggmlv3.Q3_K_M.bin](https://huggingface.co/TheBloke/CodeLlama-7B-Python-GGML/blob/main/codellama-7b-python.ggmlv3.Q3_K_M.bin) | Q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| [codellama-7b-python.ggmlv3.Q3_K_L.bin](https://huggingface.co/TheBloke/CodeLlama-7B-Python-GGML/blob/main/codellama-7b-python.ggmlv3.Q3_K_L.bin) | Q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| [codellama-7b-python.ggmlv3.Q4_0.bin](https://huggingface.co/TheBloke/CodeLlama-7B-Python-GGML/blob/main/codellama-7b-python.ggmlv3.Q4_0.bin) | Q4_0 | 4 | 3.83 GB | 6.33 GB | Original quant method, 4-bit. |
| [codellama-7b-python.ggmlv3.Q4_K_S.bin](https://huggingface.co/TheBloke/CodeLlama-7B-Python-GGML/blob/main/codellama-7b-python.ggmlv3.Q4_K_S.bin) | Q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| [codellama-7b-python.ggmlv3.Q4_K_M.bin](https://huggingface.co/TheBloke/CodeLlama-7B-Python-GGML/blob/main/codellama-7b-python.ggmlv3.Q4_K_M.bin) | Q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
| [codellama-7b-python.ggmlv3.Q4_1.bin](https://huggingface.co/TheBloke/CodeLlama-7B-Python-GGML/blob/main/codellama-7b-python.ggmlv3.Q4_1.bin) | Q4_1 | 4 | 4.24 GB | 6.74 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| [codellama-7b-python.ggmlv3.Q5_0.bin](https://huggingface.co/TheBloke/CodeLlama-7B-Python-GGML/blob/main/codellama-7b-python.ggmlv3.Q5_0.bin) | Q5_0 | 5 | 4.65 GB | 7.15 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [codellama-7b-python.ggmlv3.Q5_K_S.bin](https://huggingface.co/TheBloke/CodeLlama-7B-Python-GGML/blob/main/codellama-7b-python.ggmlv3.Q5_K_S.bin) | Q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| [codellama-7b-python.ggmlv3.Q5_K_M.bin](https://huggingface.co/TheBloke/CodeLlama-7B-Python-GGML/blob/main/codellama-7b-python.ggmlv3.Q5_K_M.bin) | Q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
| [codellama-7b-python.ggmlv3.Q5_1.bin](https://huggingface.co/TheBloke/CodeLlama-7B-Python-GGML/blob/main/codellama-7b-python.ggmlv3.Q5_1.bin) | Q5_1 | 5 | 5.06 GB | 7.56 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
| [codellama-7b-python.ggmlv3.Q6_K.bin](https://huggingface.co/TheBloke/CodeLlama-7B-Python-GGML/blob/main/codellama-7b-python.ggmlv3.Q6_K.bin) | Q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q6_K for all tensors - 6-bit quantization. |
| [codellama-7b-python.ggmlv3.Q8_0.bin](https://huggingface.co/TheBloke/CodeLlama-7B-Python-GGML/blob/main/codellama-7b-python.ggmlv3.Q8_0.bin) | Q8_0 | 8 | 7.13 GB | 9.63 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

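As a rule of thumb drawn from the table above, each "Max RAM required" figure is the model file size plus roughly 2.5 GB of working overhead; actual usage will also vary with context length and GPU offload.
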
## How to run in `llama.cpp`

Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.

For compatibility with the latest llama.cpp, please use GGUF files instead.

```
./main -t 10 -ngl 32 -m codellama-7b-python.ggmlv3.Q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```

Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 2048` to the desired sequence length for this model. For example, use `-c 4096` for a Llama 2 model. To extend context via RoPE scaling, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
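
For a script-based equivalent of the command above, here is a minimal sketch using llama-cpp-python. Note the assumptions: it needs a GGML-era release of llama-cpp-python (versions before 0.1.79, which switched to GGUF), and the file name and parameters simply mirror the CLI example rather than recommended settings:

```python
# pip install "llama-cpp-python<0.1.79"  # later versions expect GGUF files
from llama_cpp import Llama

llm = Llama(
    model_path="codellama-7b-python.ggmlv3.Q4_K_M.bin",
    n_ctx=2048,       # sequence length, as with -c above
    n_gpu_layers=32,  # layers to offload to GPU, as with -ngl; 0 for CPU-only
    n_threads=10,     # physical CPU cores, as with -t
)

output = llm(
    "### Instruction: Write a story about llamas\n### Response:",
    max_tokens=256,
    temperature=0.7,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```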

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Meta's CodeLlama 7B Python

# CodeLlama 7B-Python fp16
- Model creator: [Meta](https://ai.meta.com/llama/)

## Description

These are fp16 weights for CodeLlama 7B-Python in Transformers/HF format. They are the result of downloading CodeLlama 7B-Python from [Meta](https://ai.meta.com/blog/code-llama-large-language-model-coding/) and converting to HF format using `convert_llama_weights_to_hf.py`.

Quantisations will be coming shortly.

Please note that, due to a change in the RoPE Theta value, you must load these fp16 models with `trust_remote_code=True` for correct results.

Credit to @emozilla for creating the necessary modelling code to achieve this!
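
For illustration, a minimal loading sketch with Transformers, using the fp16 repo linked above (the prompt and generation settings are examples only, not prescribed values):

```python
# pip install transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/CodeLlama-7B-Python-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # required here due to the changed RoPE Theta value
    device_map="auto",
)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```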

## Prompt template: TBC

# Original model card

# Code Llama

## **Model Details**

**Model Developers** Meta AI

**Variations** Code Llama comes in three model sizes and three variants:
1) Code Llama: our base models designed for general code synthesis and understanding
2) Code Llama - Python: designed specifically for Python
3) Code Llama - Instruct: for instruction following and safer deployment

All variants are available in sizes of 7B, 13B and 34B parameters.

**Input** Models input text only.

**Output** Models output text only.

**Model Architecture** Code Llama and its variants are autoregressive language models using optimized transformer architectures. Code Llama 7B and 13B additionally support infilling text generation. All models were fine-tuned with up to 16K tokens, and support up to 100K tokens at inference time.

**Model Dates** Code Llama and its variants were trained between January 2023 and July 2023.

**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.

**Licence** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/).

**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)".

**Where to send comments** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md), or by opening an issue in the GitHub repository ([https://github.com/facebookresearch/codellama/](https://github.com/facebookresearch/codellama/)).

## **Intended Use**
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.

**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.

## **Hardware and Software**
**Training Factors**
We used custom training libraries. The training and fine-tuning of the released models were performed on Meta's Research Super Cluster.

**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta's sustainability program.

**Training data**
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2, with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
Code Llama - Instruct uses additional instruction fine-tuning data.

**Evaluation Results**
See evaluations for the main models and detailed ablations in Section 3, and safety evaluations in Section 4, of the research paper.

## **Ethical Considerations and Limitations**
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).