jartine committed
Commit 15ab23a
1 Parent(s): 29fa121

Add README.md to repo

Files changed (1)
  1. README.md +46 -56
README.md CHANGED
@@ -25,38 +25,36 @@ tags:
 <!-- header start -->
 <!-- 200823 -->
 <div style="width: auto; margin-left: auto; margin-right: auto">
-<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/FwAVVu7eJ4">Chat & support: jartine's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
-<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">jartine's LLM work is generously supported by a grant from <a href="https://mozilla.org">mozilla</a></p></div>
 <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
-# Phi 2 - GGUF
+# Phi 2 - llamafile
 - Model creator: [Microsoft](https://huggingface.co/microsoft)
 - Original model: [Phi 2](https://huggingface.co/microsoft/phi-2)
 
 <!-- description start -->
 ## Description
 
-This repo contains GGUF format model files for [Microsoft's Phi 2](https://huggingface.co/microsoft/phi-2).
+This repo contains llamafile format model files for [Microsoft's Phi 2](https://huggingface.co/microsoft/phi-2).
 
-<!-- description end -->
-<!-- README_GGUF.md-about-gguf start -->
-### About GGUF
+WARNING: This README may contain inaccuracies. It was generated automatically by forking <a href=/TheBloke/phi-2-GGUF>TheBloke/phi-2-GGUF</a> and piping the README through sed. Errors should be reported to jartine, and do not reflect TheBloke. You can also support his work on [Patreon](https://www.patreon.com/TheBlokeAI).
+<!-- README_llamafile.md-about-llamafile start -->
+### About llamafile
 
-GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
+llamafile is a new format introduced by Mozilla Ocho on Nov 20th 2023. It uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp binaries that run on the stock installs of six OSes for both ARM64 and AMD64.
 
-Here is an incomplete list of clients and libraries that are known to support GGUF:
+Here is an incomplete list of clients and libraries that are known to support llamafile:
 
-* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for llamafile. Offers a CLI and a server option.
 * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
 * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
 * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
@@ -67,12 +65,12 @@ Here is an incomplete list of clients and libraries that are known to support GG
 * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
 * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
 
-<!-- README_GGUF.md-about-gguf end -->
+<!-- README_llamafile.md-about-llamafile end -->
 <!-- repositories-available start -->
 ## Repositories available
 
-* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/phi-2-GPTQ)
-* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/phi-2-GGUF)
+* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/jartine/phi-2-GPTQ)
+* [2, 3, 4, 5, 6 and 8-bit llamafile models for CPU+GPU inference](https://huggingface.co/jartine/phi-2-llamafile)
 * [Microsoft's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/microsoft/phi-2)
 <!-- repositories-available end -->
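
The "About llamafile" section above describes each file as a self-contained llama.cpp executable. A minimal sketch of trying that, assuming a POSIX shell and using the Q4_K_M file from the Provided files table below as an example (on Windows, the usual llamafile approach is to rename the file with a `.exe` extension instead of using `chmod`):

```shell
# Download one of the .llamafile files from this repo first (see the download section below)
chmod +x phi-2.Q4_K_M.llamafile      # mark it executable (macOS/Linux/BSD)
./phi-2.Q4_K_M.llamafile --help      # list the llama.cpp options the binary accepts
./phi-2.Q4_K_M.llamafile             # run it; it should start a local llama.cpp server/web UI
```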
 
@@ -88,10 +86,10 @@ Output:
 <!-- prompt-template end -->
 
 
-<!-- compatibility_gguf start -->
+<!-- compatibility_llamafile start -->
 ## Compatibility
 
-These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
+These quantised llamafilev2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
 
 They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
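
If you build `llama.cpp` yourself from a git checkout, a quick way to confirm your build is at or after the commit pinned above (plain git; nothing here is specific to this repo):

```shell
cd llama.cpp
git log --oneline -1    # show the commit your build is based on
# exits 0 and prints "ok" if d0cee0d is an ancestor of your current checkout
git merge-base --is-ancestor d0cee0d36d5be95a0d9088b674dbb27354107221 HEAD && echo ok
```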
 
@@ -110,34 +108,34 @@ The new methods available are:
 
 Refer to the Provided Files table below to see what files use which methods, and how.
 </details>
-<!-- compatibility_gguf end -->
+<!-- compatibility_llamafile end -->
 
-<!-- README_GGUF.md-provided-files start -->
+<!-- README_llamafile.md-provided-files start -->
 ## Provided files
 
 | Name | Quant method | Bits | Size | Max RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
-| [phi-2.Q2_K.gguf](https://huggingface.co/TheBloke/phi-2-GGUF/blob/main/phi-2.Q2_K.gguf) | Q2_K | 2 | 1.17 GB| 3.67 GB | smallest, significant quality loss - not recommended for most purposes |
-| [phi-2.Q3_K_S.gguf](https://huggingface.co/TheBloke/phi-2-GGUF/blob/main/phi-2.Q3_K_S.gguf) | Q3_K_S | 3 | 1.25 GB| 3.75 GB | very small, high quality loss |
-| [phi-2.Q3_K_M.gguf](https://huggingface.co/TheBloke/phi-2-GGUF/blob/main/phi-2.Q3_K_M.gguf) | Q3_K_M | 3 | 1.48 GB| 3.98 GB | very small, high quality loss |
-| [phi-2.Q4_0.gguf](https://huggingface.co/TheBloke/phi-2-GGUF/blob/main/phi-2.Q4_0.gguf) | Q4_0 | 4 | 1.60 GB| 4.10 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
-| [phi-2.Q3_K_L.gguf](https://huggingface.co/TheBloke/phi-2-GGUF/blob/main/phi-2.Q3_K_L.gguf) | Q3_K_L | 3 | 1.60 GB| 4.10 GB | small, substantial quality loss |
-| [phi-2.Q4_K_S.gguf](https://huggingface.co/TheBloke/phi-2-GGUF/blob/main/phi-2.Q4_K_S.gguf) | Q4_K_S | 4 | 1.62 GB| 4.12 GB | small, greater quality loss |
-| [phi-2.Q4_K_M.gguf](https://huggingface.co/TheBloke/phi-2-GGUF/blob/main/phi-2.Q4_K_M.gguf) | Q4_K_M | 4 | 1.79 GB| 4.29 GB | medium, balanced quality - recommended |
-| [phi-2.Q5_0.gguf](https://huggingface.co/TheBloke/phi-2-GGUF/blob/main/phi-2.Q5_0.gguf) | Q5_0 | 5 | 1.93 GB| 4.43 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
-| [phi-2.Q5_K_S.gguf](https://huggingface.co/TheBloke/phi-2-GGUF/blob/main/phi-2.Q5_K_S.gguf) | Q5_K_S | 5 | 1.93 GB| 4.43 GB | large, low quality loss - recommended |
-| [phi-2.Q5_K_M.gguf](https://huggingface.co/TheBloke/phi-2-GGUF/blob/main/phi-2.Q5_K_M.gguf) | Q5_K_M | 5 | 2.07 GB| 4.57 GB | large, very low quality loss - recommended |
-| [phi-2.Q6_K.gguf](https://huggingface.co/TheBloke/phi-2-GGUF/blob/main/phi-2.Q6_K.gguf) | Q6_K | 6 | 2.29 GB| 4.79 GB | very large, extremely low quality loss |
-| [phi-2.Q8_0.gguf](https://huggingface.co/TheBloke/phi-2-GGUF/blob/main/phi-2.Q8_0.gguf) | Q8_0 | 8 | 2.96 GB| 5.46 GB | very large, extremely low quality loss - not recommended |
+| [phi-2.Q2_K.llamafile](https://huggingface.co/jartine/phi-2-llamafile/blob/main/phi-2.Q2_K.llamafile) | Q2_K | 2 | 1.17 GB| 3.67 GB | smallest, significant quality loss - not recommended for most purposes |
+| [phi-2.Q3_K_S.llamafile](https://huggingface.co/jartine/phi-2-llamafile/blob/main/phi-2.Q3_K_S.llamafile) | Q3_K_S | 3 | 1.25 GB| 3.75 GB | very small, high quality loss |
+| [phi-2.Q3_K_M.llamafile](https://huggingface.co/jartine/phi-2-llamafile/blob/main/phi-2.Q3_K_M.llamafile) | Q3_K_M | 3 | 1.48 GB| 3.98 GB | very small, high quality loss |
+| [phi-2.Q4_0.llamafile](https://huggingface.co/jartine/phi-2-llamafile/blob/main/phi-2.Q4_0.llamafile) | Q4_0 | 4 | 1.60 GB| 4.10 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [phi-2.Q3_K_L.llamafile](https://huggingface.co/jartine/phi-2-llamafile/blob/main/phi-2.Q3_K_L.llamafile) | Q3_K_L | 3 | 1.60 GB| 4.10 GB | small, substantial quality loss |
+| [phi-2.Q4_K_S.llamafile](https://huggingface.co/jartine/phi-2-llamafile/blob/main/phi-2.Q4_K_S.llamafile) | Q4_K_S | 4 | 1.62 GB| 4.12 GB | small, greater quality loss |
+| [phi-2.Q4_K_M.llamafile](https://huggingface.co/jartine/phi-2-llamafile/blob/main/phi-2.Q4_K_M.llamafile) | Q4_K_M | 4 | 1.79 GB| 4.29 GB | medium, balanced quality - recommended |
+| [phi-2.Q5_0.llamafile](https://huggingface.co/jartine/phi-2-llamafile/blob/main/phi-2.Q5_0.llamafile) | Q5_0 | 5 | 1.93 GB| 4.43 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [phi-2.Q5_K_S.llamafile](https://huggingface.co/jartine/phi-2-llamafile/blob/main/phi-2.Q5_K_S.llamafile) | Q5_K_S | 5 | 1.93 GB| 4.43 GB | large, low quality loss - recommended |
+| [phi-2.Q5_K_M.llamafile](https://huggingface.co/jartine/phi-2-llamafile/blob/main/phi-2.Q5_K_M.llamafile) | Q5_K_M | 5 | 2.07 GB| 4.57 GB | large, very low quality loss - recommended |
+| [phi-2.Q6_K.llamafile](https://huggingface.co/jartine/phi-2-llamafile/blob/main/phi-2.Q6_K.llamafile) | Q6_K | 6 | 2.29 GB| 4.79 GB | very large, extremely low quality loss |
+| [phi-2.Q8_0.llamafile](https://huggingface.co/jartine/phi-2-llamafile/blob/main/phi-2.Q8_0.llamafile) | Q8_0 | 8 | 2.96 GB| 5.46 GB | very large, extremely low quality loss - not recommended |
 
 **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
 
 
 
-<!-- README_GGUF.md-provided-files end -->
+<!-- README_llamafile.md-provided-files end -->
 
-<!-- README_GGUF.md-how-to-download start -->
-## How to download GGUF files
+<!-- README_llamafile.md-how-to-download start -->
+## How to download llamafile files
 
 **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
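
To pick that single file, it can help to first list which `.llamafile` files the repo actually publishes. A small sketch using the public Hugging Face Hub API; it assumes `curl` and `jq` are installed:

```shell
# Query the Hub API for this repo and print just the .llamafile filenames
curl -s https://huggingface.co/api/models/jartine/phi-2-llamafile \
  | jq -r '.siblings[].rfilename' \
  | grep '\.llamafile$'
```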
 
@@ -149,7 +147,7 @@ The following clients/libraries will automatically download models for you, prov
 
 ### In `text-generation-webui`
 
-Under Download Model, you can enter the model repo: TheBloke/phi-2-GGUF and below it, a specific filename to download, such as: phi-2.Q4_K_M.gguf.
+Under Download Model, you can enter the model repo: jartine/phi-2-llamafile and below it, a specific filename to download, such as: phi-2.Q4_K_M.llamafile.
 
 Then click Download.
 
@@ -164,7 +162,7 @@ pip3 install huggingface-hub
 Then you can download any individual model file to the current directory, at high speed, with a command like this:
 
 ```shell
-huggingface-cli download TheBloke/phi-2-GGUF phi-2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+huggingface-cli download jartine/phi-2-llamafile phi-2.Q4_K_M.llamafile --local-dir . --local-dir-use-symlinks False
 ```
 
 <details>
@@ -173,7 +171,7 @@ huggingface-cli download TheBloke/phi-2-GGUF phi-2.Q4_K_M.gguf --local-dir . --l
 You can also download multiple files at once with a pattern:
 
 ```shell
-huggingface-cli download TheBloke/phi-2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
+huggingface-cli download jartine/phi-2-llamafile --local-dir . --local-dir-use-symlinks False --include='*Q4_K*llamafile'
 ```
 
 For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
@@ -187,25 +185,25 @@ pip3 install hf_transfer
 And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
 ```shell
-HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/phi-2-GGUF phi-2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download jartine/phi-2-llamafile phi-2.Q4_K_M.llamafile --local-dir . --local-dir-use-symlinks False
 ```
 
 Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
 </details>
-<!-- README_GGUF.md-how-to-download end -->
+<!-- README_llamafile.md-how-to-download end -->
 
-<!-- README_GGUF.md-how-to-run start -->
+<!-- README_llamafile.md-how-to-run start -->
 ## Example `llama.cpp` command
 
 Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
 ```shell
-./main -ngl 35 -m phi-2.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Instruct: {prompt}\nOutput:"
+./main -ngl 35 -m phi-2.Q4_K_M.llamafile --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Instruct: {prompt}\nOutput:"
 ```
 
 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
 
-Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
+Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the llamafile file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
 
 If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
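
Builds from the same commit also include a `server` binary, which can be more convenient than `./main` for repeated requests. A sketch of serving the model over HTTP and querying it with the prompt template above; it assumes the `.llamafile` can be passed to `-m` the same way this README passes it to `./main`, and uses llama.cpp's `/completion` endpoint:

```shell
# Serve the model over HTTP (flags mirror the ./main example above)
./server -m phi-2.Q4_K_M.llamafile -c 2048 -ngl 35 --host 127.0.0.1 --port 8080

# In another terminal: request a completion using the Instruct/Output prompt template
curl -s http://127.0.0.1:8080/completion \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "Instruct: Write a story about llamas.\nOutput:", "n_predict": 128, "temperature": 0.7}'
```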
 
@@ -217,7 +215,7 @@ Further instructions can be found in the text-generation-webui documentation, he
 
 ## How to run from Python code
 
-You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
+You can use llamafile models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
 
 ### How to load this model in Python code, using llama-cpp-python
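
Before the loading example below will run, `llama-cpp-python` needs to be installed. A minimal sketch; the second command is only for NVIDIA GPU offloading and assumes a CUDA toolkit and compiler are present, following the `CMAKE_ARGS` convention documented by llama-cpp-python at the time of writing:

```shell
# CPU-only install
pip3 install llama-cpp-python

# Optional: rebuild with cuBLAS support for GPU offloading (assumes CUDA toolchain)
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip3 install --force-reinstall --no-cache-dir llama-cpp-python
```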
 
@@ -253,7 +251,7 @@ from llama_cpp import Llama
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
 llm = Llama(
-  model_path="./phi-2.Q4_K_M.gguf",  # Download the model file first
+  model_path="./phi-2.Q4_K_M.llamafile",  # Download the model file first
   n_ctx=2048,  # The max sequence length to use - note that longer sequence lengths require much more resources
   n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
   n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
@@ -269,7 +267,7 @@ output = llm(
 
 # Chat Completion API
 
-llm = Llama(model_path="./phi-2.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
+llm = Llama(model_path="./phi-2.Q4_K_M.llamafile", chat_format="llama-2")  # Set chat_format according to the model you are using
 llm.create_chat_completion(
     messages = [
         {"role": "system", "content": "You are a story writing assistant."},
@@ -288,7 +286,7 @@ Here are guides on using llama-cpp-python and ctransformers with LangChain:
 * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
 * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
 
-<!-- README_GGUF.md-how-to-run end -->
+<!-- README_llamafile.md-how-to-run end -->
 
 <!-- footer start -->
 <!-- 200823 -->
@@ -296,31 +294,23 @@ Here are guides on using llama-cpp-python and ctransformers with LangChain:
 
 For further support, and discussions on these models and AI in general, join us at:
 
-[TheBloke AI's Discord server](https://discord.gg/theblokeai)
+[jartine AI's Discord server](https://discord.gg/FwAVVu7eJ4)
 
 ## Thanks, and how to contribute
 
-Thanks to the [chirper.ai](https://chirper.ai) team!
 
-Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
 
 I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
 
 If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
 
-Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
 
-* Patreon: https://patreon.com/TheBlokeAI
-* Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Special thanks to**: Aemon Algiz.
 
-**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
 
 
-Thank you to all my generous patrons and donaters!
 
-And thank you again to a16z for their generous grant.
+And thank you again to mozilla for their generous grant.
 
 <!-- footer end -->
 
 