aashish1904 committed on
Commit
b11de02
1 Parent(s): 75c5f98

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +87 -441
README.md CHANGED
@@ -1,469 +1,115 @@
1
- # llama.cpp
2
 
3
- ![llama](https://user-images.githubusercontent.com/1991296/230134379-7181e485-c521-4d23-a0d6-f7b3b61ba524.png)
4
 
5
- [![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT)
6
- [![Server](https://github.com/ggerganov/llama.cpp/actions/workflows/server.yml/badge.svg?branch=master&event=schedule)](https://github.com/ggerganov/llama.cpp/actions/workflows/server.yml)
7
- [![Conan Center](https://shields.io/conan/v/llama-cpp)](https://conan.io/center/llama-cpp)
8
 
9
- [Roadmap](https://github.com/users/ggerganov/projects/7) / [Project status](https://github.com/ggerganov/llama.cpp/discussions/3471) / [Manifesto](https://github.com/ggerganov/llama.cpp/discussions/205) / [ggml](https://github.com/ggerganov/ggml)
10
 
11
- Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others) in pure C/C++
12
 
13
- > [!IMPORTANT]
14
- [2024 Jun 12] Binaries have been renamed w/ a `llama-` prefix. `main` is now `llama-cli`, `server` is `llama-server`, etc (https://github.com/ggerganov/llama.cpp/pull/7809)
15
 
16
- ## Recent API changes
17
 
18
- - [2024 Jun 26] The source code and CMake build scripts have been restructured https://github.com/ggerganov/llama.cpp/pull/8006
19
- - [2024 Apr 21] `llama_token_to_piece` can now optionally render special tokens https://github.com/ggerganov/llama.cpp/pull/6807
20
- - [2024 Apr 4] State and session file functions reorganized under `llama_state_*` https://github.com/ggerganov/llama.cpp/pull/6341
21
- - [2024 Mar 26] Logits and embeddings API updated for compactness https://github.com/ggerganov/llama.cpp/pull/6122
22
- - [2024 Mar 13] Add `llama_synchronize()` + `llama_context_params.n_ubatch` https://github.com/ggerganov/llama.cpp/pull/6017
23
- - [2024 Mar 8] `llama_kv_cache_seq_rm()` returns a `bool` instead of `void`, and new `llama_n_seq_max()` returns the upper limit of acceptable `seq_id` in batches (relevant when dealing with multiple sequences) https://github.com/ggerganov/llama.cpp/pull/5328
24
- - [2024 Mar 4] Embeddings API updated https://github.com/ggerganov/llama.cpp/pull/5796
25
- - [2024 Mar 3] `struct llama_context_params` https://github.com/ggerganov/llama.cpp/pull/5849
26
 
27
- ## Hot topics
28
 
29
- - **`convert.py` has been deprecated and moved to `examples/convert_legacy_llama.py`, please use `convert_hf_to_gguf.py`** https://github.com/ggerganov/llama.cpp/pull/7430
30
- - Initial Flash-Attention support: https://github.com/ggerganov/llama.cpp/pull/5021
31
- - BPE pre-tokenization support has been added: https://github.com/ggerganov/llama.cpp/pull/6920
32
- - MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387
33
- - Model sharding instructions using `gguf-split` https://github.com/ggerganov/llama.cpp/discussions/6404
34
- - Fix major bug in Metal batched inference https://github.com/ggerganov/llama.cpp/pull/6225
35
- - Multi-GPU pipeline parallelism support https://github.com/ggerganov/llama.cpp/pull/6017
36
- - Looking for contributions to add Deepseek support: https://github.com/ggerganov/llama.cpp/issues/5981
37
- - Quantization blind testing: https://github.com/ggerganov/llama.cpp/discussions/5962
38
- - Initial Mamba support has been added: https://github.com/ggerganov/llama.cpp/pull/5328
39
 
40
- ----
41
 
42
- ## Description
43
 
44
- The main goal of `llama.cpp` is to enable LLM inference with minimal setup and state-of-the-art performance on a wide
45
- variety of hardware - locally and in the cloud.
46
 
47
- - Plain C/C++ implementation without any dependencies
48
- - Apple silicon is a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
49
- - AVX, AVX2 and AVX512 support for x86 architectures
50
- - 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use
51
- - Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for AMD GPUs via HIP)
52
- - Vulkan and SYCL backend support
53
- - CPU+GPU hybrid inference to partially accelerate models larger than the total VRAM capacity
54
 
55
- Since its [inception](https://github.com/ggerganov/llama.cpp/issues/33#issuecomment-1465108022), the project has
56
- improved significantly thanks to many contributions. It is the main playground for developing new features for the
57
- [ggml](https://github.com/ggerganov/ggml) library.
58
 
59
- **Supported models:**
60
 
61
- Typically finetunes of the base models below are supported as well.
 
 
62
 
63
- - [X] LLaMA 🦙
64
- - [x] LLaMA 2 🦙🦙
65
- - [x] LLaMA 3 🦙🦙🦙
66
- - [X] [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1)
67
- - [x] [Mixtral MoE](https://huggingface.co/models?search=mistral-ai/Mixtral)
68
- - [x] [DBRX](https://huggingface.co/databricks/dbrx-instruct)
69
- - [X] [Falcon](https://huggingface.co/models?search=tiiuae/falcon)
70
- - [X] [Chinese LLaMA / Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca) and [Chinese LLaMA-2 / Alpaca-2](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2)
71
- - [X] [Vigogne (French)](https://github.com/bofenghuang/vigogne)
72
- - [X] [BERT](https://github.com/ggerganov/llama.cpp/pull/5423)
73
- - [X] [Koala](https://bair.berkeley.edu/blog/2023/04/03/koala/)
74
- - [X] [Baichuan 1 & 2](https://huggingface.co/models?search=baichuan-inc/Baichuan) + [derivations](https://huggingface.co/hiyouga/baichuan-7b-sft)
75
- - [X] [Aquila 1 & 2](https://huggingface.co/models?search=BAAI/Aquila)
76
- - [X] [Starcoder models](https://github.com/ggerganov/llama.cpp/pull/3187)
77
- - [X] [Refact](https://huggingface.co/smallcloudai/Refact-1_6B-fim)
78
- - [X] [MPT](https://github.com/ggerganov/llama.cpp/pull/3417)
79
- - [X] [Bloom](https://github.com/ggerganov/llama.cpp/pull/3553)
80
- - [x] [Yi models](https://huggingface.co/models?search=01-ai/Yi)
81
- - [X] [StableLM models](https://huggingface.co/stabilityai)
82
- - [x] [Deepseek models](https://huggingface.co/models?search=deepseek-ai/deepseek)
83
- - [x] [Qwen models](https://huggingface.co/models?search=Qwen/Qwen)
84
- - [x] [PLaMo-13B](https://github.com/ggerganov/llama.cpp/pull/3557)
85
- - [x] [Phi models](https://huggingface.co/models?search=microsoft/phi)
86
- - [x] [GPT-2](https://huggingface.co/gpt2)
87
- - [x] [Orion 14B](https://github.com/ggerganov/llama.cpp/pull/5118)
88
- - [x] [InternLM2](https://huggingface.co/models?search=internlm2)
89
- - [x] [CodeShell](https://github.com/WisdomShell/codeshell)
90
- - [x] [Gemma](https://ai.google.dev/gemma)
91
- - [x] [Mamba](https://github.com/state-spaces/mamba)
92
- - [x] [Grok-1](https://huggingface.co/keyfan/grok-1-hf)
93
- - [x] [Xverse](https://huggingface.co/models?search=xverse)
94
- - [x] [Command-R models](https://huggingface.co/models?search=CohereForAI/c4ai-command-r)
95
- - [x] [SEA-LION](https://huggingface.co/models?search=sea-lion)
96
- - [x] [GritLM-7B](https://huggingface.co/GritLM/GritLM-7B) + [GritLM-8x7B](https://huggingface.co/GritLM/GritLM-8x7B)
97
- - [x] [OLMo](https://allenai.org/olmo)
98
- - [x] [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) + [Pythia](https://github.com/EleutherAI/pythia)
99
- - [x] [ChatGLM3-6b](https://huggingface.co/THUDM/chatglm3-6b) + [ChatGLM4-9b](https://huggingface.co/THUDM/glm-4-9b)
100
 
101
- (instructions for supporting more models: [HOWTO-add-model.md](./docs/development/HOWTO-add-model.md))
102
 
103
- **Multimodal models:**
104
-
105
- - [x] [LLaVA 1.5 models](https://huggingface.co/collections/liuhaotian/llava-15-653aac15d994e992e2677a7e), [LLaVA 1.6 models](https://huggingface.co/collections/liuhaotian/llava-16-65b9e40155f60fd046a5ccf2)
106
- - [x] [BakLLaVA](https://huggingface.co/models?search=SkunkworksAI/Bakllava)
107
- - [x] [Obsidian](https://huggingface.co/NousResearch/Obsidian-3B-V0.5)
108
- - [x] [ShareGPT4V](https://huggingface.co/models?search=Lin-Chen/ShareGPT4V)
109
- - [x] [MobileVLM 1.7B/3B models](https://huggingface.co/models?search=mobileVLM)
110
- - [x] [Yi-VL](https://huggingface.co/models?search=Yi-VL)
111
- - [x] [Mini CPM](https://huggingface.co/models?search=MiniCPM)
112
- - [x] [Moondream](https://huggingface.co/vikhyatk/moondream2)
113
- - [x] [Bunny](https://github.com/BAAI-DCAI/Bunny)
114
-
115
- **Bindings:**
116
-
117
- - Python: [abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
118
- - Go: [go-skynet/go-llama.cpp](https://github.com/go-skynet/go-llama.cpp)
119
- - Node.js: [withcatai/node-llama-cpp](https://github.com/withcatai/node-llama-cpp)
120
- - JS/TS (llama.cpp server client): [lgrammel/modelfusion](https://modelfusion.dev/integration/model-provider/llamacpp)
121
- - JavaScript/Wasm (works in browser): [tangledgroup/llama-cpp-wasm](https://github.com/tangledgroup/llama-cpp-wasm)
122
- - Typescript/Wasm (nicer API, available on npm): [ngxson/wllama](https://github.com/ngxson/wllama)
123
- - Ruby: [yoshoku/llama_cpp.rb](https://github.com/yoshoku/llama_cpp.rb)
124
- - Rust (more features): [edgenai/llama_cpp-rs](https://github.com/edgenai/llama_cpp-rs)
125
- - Rust (nicer API): [mdrokz/rust-llama.cpp](https://github.com/mdrokz/rust-llama.cpp)
126
- - Rust (more direct bindings): [utilityai/llama-cpp-rs](https://github.com/utilityai/llama-cpp-rs)
127
- - C#/.NET: [SciSharp/LLamaSharp](https://github.com/SciSharp/LLamaSharp)
128
- - Scala 3: [donderom/llm4s](https://github.com/donderom/llm4s)
129
- - Clojure: [phronmophobic/llama.clj](https://github.com/phronmophobic/llama.clj)
130
- - React Native: [mybigday/llama.rn](https://github.com/mybigday/llama.rn)
131
- - Java: [kherud/java-llama.cpp](https://github.com/kherud/java-llama.cpp)
132
- - Zig: [deins/llama.cpp.zig](https://github.com/Deins/llama.cpp.zig)
133
- - Flutter/Dart: [netdur/llama_cpp_dart](https://github.com/netdur/llama_cpp_dart)
134
- - PHP (API bindings and features built on top of llama.cpp): [distantmagic/resonance](https://github.com/distantmagic/resonance) [(more info)](https://github.com/ggerganov/llama.cpp/pull/6326)
135
- - Guile Scheme: [guile_llama_cpp](https://savannah.nongnu.org/projects/guile-llama-cpp)
136
-
137
- **UI:**
138
-
139
- Unless otherwise noted these projects are open-source with permissive licensing:
140
-
141
- - [iohub/collama](https://github.com/iohub/coLLaMA)
142
- - [janhq/jan](https://github.com/janhq/jan) (AGPL)
143
- - [nat/openplayground](https://github.com/nat/openplayground)
144
- - [Faraday](https://faraday.dev/) (proprietary)
145
- - [LMStudio](https://lmstudio.ai/) (proprietary)
146
- - [Layla](https://play.google.com/store/apps/details?id=com.laylalite) (proprietary)
147
- - [LocalAI](https://github.com/mudler/LocalAI) (MIT)
148
- - [LostRuins/koboldcpp](https://github.com/LostRuins/koboldcpp) (AGPL)
149
- - [Mozilla-Ocho/llamafile](https://github.com/Mozilla-Ocho/llamafile)
150
- - [nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all)
151
- - [ollama/ollama](https://github.com/ollama/ollama)
152
- - [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) (AGPL)
153
- - [psugihara/FreeChat](https://github.com/psugihara/FreeChat)
154
- - [cztomsik/ava](https://github.com/cztomsik/ava) (MIT)
155
- - [ptsochantaris/emeltal](https://github.com/ptsochantaris/emeltal)
156
- - [pythops/tenere](https://github.com/pythops/tenere) (AGPL)
157
- - [RAGNA Desktop](https://ragna.app/) (proprietary)
158
- - [RecurseChat](https://recurse.chat/) (proprietary)
159
- - [semperai/amica](https://github.com/semperai/amica)
160
- - [withcatai/catai](https://github.com/withcatai/catai)
161
- - [Mobile-Artificial-Intelligence/maid](https://github.com/Mobile-Artificial-Intelligence/maid) (MIT)
162
- - [Msty](https://msty.app) (proprietary)
163
- - [LLMFarm](https://github.com/guinmoon/LLMFarm?tab=readme-ov-file) (MIT)
164
- - [KanTV](https://github.com/zhouwg/kantv?tab=readme-ov-file) (Apache-2.0 or later)
165
- - [Dot](https://github.com/alexpinel/Dot) (GPL)
166
- - [MindMac](https://mindmac.app) (proprietary)
167
- - [KodiBot](https://github.com/firatkiral/kodibot) (GPL)
168
- - [eva](https://github.com/ylsdamxssjxxdd/eva) (MIT)
169
- - [AI Sublime Text plugin](https://github.com/yaroslavyaroslav/OpenAI-sublime-text) (MIT)
170
- - [AIKit](https://github.com/sozercan/aikit) (MIT)
171
- - [LARS - The LLM & Advanced Referencing Solution](https://github.com/abgulati/LARS) (AGPL)
172
-
173
- *(to have a project listed here, it should clearly state that it depends on `llama.cpp`)*
174
-
175
- **Tools:**
176
-
177
- - [akx/ggify](https://github.com/akx/ggify) – download PyTorch models from HuggingFace Hub and convert them to GGML
178
- - [crashr/gppm](https://github.com/crashr/gppm) – launch llama.cpp instances utilizing NVIDIA Tesla P40 or P100 GPUs with reduced idle power consumption
179
-
180
- **Infrastructure:**
181
-
182
- - [Paddler](https://github.com/distantmagic/paddler) - Stateful load balancer custom-tailored for llama.cpp
183
-
184
- ## Demo
185
-
186
- <details>
187
- <summary>Typical run using LLaMA v2 13B on M2 Ultra</summary>
188
-
189
- ```
190
- $ make -j && ./llama-cli -m models/llama-13b-v2/ggml-model-q4_0.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e
191
- I llama.cpp build info:
192
- I UNAME_S: Darwin
193
- I UNAME_P: arm
194
- I UNAME_M: arm64
195
- I CFLAGS: -I. -O3 -std=c11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -pthread -DGGML_USE_K_QUANTS -DGGML_USE_ACCELERATE
196
- I CXXFLAGS: -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -DGGML_USE_K_QUANTS
197
- I LDFLAGS: -framework Accelerate
198
- I CC: Apple clang version 14.0.3 (clang-1403.0.22.14.1)
199
- I CXX: Apple clang version 14.0.3 (clang-1403.0.22.14.1)
200
-
201
- make: Nothing to be done for `default'.
202
- main: build = 1041 (cf658ad)
203
- main: seed = 1692823051
204
- llama_model_loader: loaded meta data with 16 key-value pairs and 363 tensors from models/llama-13b-v2/ggml-model-q4_0.gguf (version GGUF V1 (latest))
205
- llama_model_loader: - type f32: 81 tensors
206
- llama_model_loader: - type q4_0: 281 tensors
207
- llama_model_loader: - type q6_K: 1 tensors
208
- llm_load_print_meta: format = GGUF V1 (latest)
209
- llm_load_print_meta: arch = llama
210
- llm_load_print_meta: vocab type = SPM
211
- llm_load_print_meta: n_vocab = 32000
212
- llm_load_print_meta: n_merges = 0
213
- llm_load_print_meta: n_ctx_train = 4096
214
- llm_load_print_meta: n_ctx = 512
215
- llm_load_print_meta: n_embd = 5120
216
- llm_load_print_meta: n_head = 40
217
- llm_load_print_meta: n_head_kv = 40
218
- llm_load_print_meta: n_layer = 40
219
- llm_load_print_meta: n_rot = 128
220
- llm_load_print_meta: n_gqa = 1
221
- llm_load_print_meta: f_norm_eps = 1.0e-05
222
- llm_load_print_meta: f_norm_rms_eps = 1.0e-05
223
- llm_load_print_meta: n_ff = 13824
224
- llm_load_print_meta: freq_base = 10000.0
225
- llm_load_print_meta: freq_scale = 1
226
- llm_load_print_meta: model type = 13B
227
- llm_load_print_meta: model ftype = mostly Q4_0
228
- llm_load_print_meta: model size = 13.02 B
229
- llm_load_print_meta: general.name = LLaMA v2
230
- llm_load_print_meta: BOS token = 1 '<s>'
231
- llm_load_print_meta: EOS token = 2 '</s>'
232
- llm_load_print_meta: UNK token = 0 '<unk>'
233
- llm_load_print_meta: LF token = 13 '<0x0A>'
234
- llm_load_tensors: ggml ctx size = 0.11 MB
235
- llm_load_tensors: mem required = 7024.01 MB (+ 400.00 MB per state)
236
- ...................................................................................................
237
- llama_new_context_with_model: kv self size = 400.00 MB
238
- llama_new_context_with_model: compute buffer total size = 75.41 MB
239
-
240
- system_info: n_threads = 16 / 24 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |
241
- sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.800000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
242
- generate: n_ctx = 512, n_batch = 512, n_predict = 400, n_keep = 0
243
-
244
-
245
- Building a website can be done in 10 simple steps:
246
- Step 1: Find the right website platform.
247
- Step 2: Choose your domain name and hosting plan.
248
- Step 3: Design your website layout.
249
- Step 4: Write your website content and add images.
250
- Step 5: Install security features to protect your site from hackers or spammers
251
- Step 6: Test your website on multiple browsers, mobile devices, operating systems etc…
252
- Step 7: Test it again with people who are not related to you personally – friends or family members will work just fine!
253
- Step 8: Start marketing and promoting the website via social media channels or paid ads
254
- Step 9: Analyze how many visitors have come to your site so far, what type of people visit more often than others (e.g., men vs women) etc…
255
- Step 10: Continue to improve upon all aspects mentioned above by following trends in web design and staying up-to-date on new technologies that can enhance user experience even further!
256
- How does a Website Work?
257
- A website works by having pages, which are made of HTML code. This code tells your computer how to display the content on each page you visit – whether it’s an image or text file (like PDFs). In order for someone else’s browser not only be able but also want those same results when accessing any given URL; some additional steps need taken by way of programming scripts that will add functionality such as making links clickable!
258
- The most common type is called static HTML pages because they remain unchanged over time unless modified manually (either through editing files directly or using an interface such as WordPress). They are usually served up via HTTP protocols – this means anyone can access them without having any special privileges like being part of a group who is allowed into restricted areas online; however, there may still exist some limitations depending upon where one lives geographically speaking.
259
- How to
260
- llama_print_timings: load time = 576.45 ms
261
- llama_print_timings: sample time = 283.10 ms / 400 runs ( 0.71 ms per token, 1412.91 tokens per second)
262
- llama_print_timings: prompt eval time = 599.83 ms / 19 tokens ( 31.57 ms per token, 31.68 tokens per second)
263
- llama_print_timings: eval time = 24513.59 ms / 399 runs ( 61.44 ms per token, 16.28 tokens per second)
264
- llama_print_timings: total time = 25431.49 ms
265
- ```
266
-
267
- </details>
268
-
269
- <details>
270
- <summary>Demo of running both LLaMA-7B and whisper.cpp on a single M1 Pro MacBook</summary>
271
-
272
- And here is another demo of running both LLaMA-7B and [whisper.cpp](https://github.com/ggerganov/whisper.cpp) on a single M1 Pro MacBook:
273
-
274
- https://user-images.githubusercontent.com/1991296/224442907-7693d4be-acaa-4e01-8b4f-add84093ffff.mp4
275
-
276
- </details>
277
-
278
- ## Usage
279
-
280
- Here are the end-to-end binary build and model conversion steps for most supported models.
281
-
282
- ### Basic usage
283
-
284
- Firstly, you need to get the binary. There are different methods that you can follow:
285
- - Method 1: Clone this repository and build locally, see [how to build](./docs/build.md)
286
- - Method 2: If you are using MacOS or Linux, you can install llama.cpp via [brew, flox or nix](./docs/install.md)
287
- - Method 3: Use a Docker image, see [documentation for Docker](./docs/docker.md)
288
- - Method 4: Download pre-built binary from [releases](https://github.com/ggerganov/llama.cpp/releases)
289
-
290
- You can run a basic completion using this command:
291
-
292
- ```bash
293
- llama-cli -m your_model.gguf -p "I believe the meaning of life is" -n 128
294
-
295
- # Output:
296
- # I believe the meaning of life is to find your own truth and to live in accordance with it. For me, this means being true to myself and following my passions, even if they don't align with societal expectations. I think that's what I love about yoga – it's not just a physical practice, but a spiritual one too. It's about connecting with yourself, listening to your inner voice, and honoring your own unique journey.
297
- ```
298
-
299
- See [this page](./examples/main/README.md) for a full list of parameters.
300
-
301
- ### Conversation mode
302
-
303
- If you want a more ChatGPT-like experience, you can run in conversation mode by passing `-cnv` as a parameter:
304
-
305
- ```bash
306
- llama-cli -m your_model.gguf -p "You are a helpful assistant" -cnv
307
-
308
- # Output:
309
- # > hi, who are you?
310
- # Hi there! I'm your helpful assistant! I'm an AI-powered chatbot designed to assist and provide information to users like you. I'm here to help answer your questions, provide guidance, and offer support on a wide range of topics. I'm a friendly and knowledgeable AI, and I'm always happy to help with anything you need. What's on your mind, and how can I assist you today?
311
- #
312
- # > what is 1+1?
313
- # Easy peasy! The answer to 1+1 is... 2!
314
- ```
315
-
316
- By default, the chat template will be taken from the input model. If you want to use another chat template, pass `--chat-template NAME` as a parameter. See the list of [supported templates](https://github.com/ggerganov/llama.cpp/wiki/Templates-supported-by-llama_chat_apply_template)
317
-
318
- ```bash
319
- ./llama-cli -m your_model.gguf -p "You are a helpful assistant" -cnv --chat-template chatml
320
- ```
321
-
322
- You can also use your own template via in-prefix, in-suffix and reverse-prompt parameters:
323
-
324
- ```bash
325
- ./llama-cli -m your_model.gguf -p "You are a helpful assistant" -cnv --in-prefix 'User: ' --reverse-prompt 'User:'
326
- ```
327
-
328
- ### Web server
329
-
330
- [llama.cpp web server](./examples/server/README.md) is a lightweight [OpenAI API](https://github.com/openai/openai-openapi) compatible HTTP server that can be used to serve local models and easily connect them to existing clients.
331
-
332
- Example usage:
333
-
334
- ```bash
335
- ./llama-server -m your_model.gguf --port 8080
336
-
337
- # Basic web UI can be accessed via browser: http://localhost:8080
338
- # Chat completion endpoint: http://localhost:8080/v1/chat/completions
339
- ```
340
-
341
- ### Interactive mode
342
-
343
- > [!NOTE]
344
- > If you prefer basic usage, please consider using conversation mode instead of interactive mode
345
-
346
- In this mode, you can always interrupt generation by pressing Ctrl+C and entering one or more lines of text, which will be converted into tokens and appended to the current context. You can also specify a *reverse prompt* with the parameter `-r "reverse prompt string"`. This will result in user input being prompted whenever the exact tokens of the reverse prompt string are encountered in the generation. A typical use is to use a prompt that makes LLaMA emulate a chat between multiple users, say Alice and Bob, and pass `-r "Alice:"`.
347
-
348
- Here is an example of a few-shot interaction, invoked with the command
349
-
350
- ```bash
351
- # default arguments using a 7B model
352
- ./examples/chat.sh
353
-
354
- # advanced chat with a 13B model
355
- ./examples/chat-13B.sh
356
-
357
- # custom arguments using a 13B model
358
- ./llama-cli -m ./models/13B/ggml-model-q4_0.gguf -n 256 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-bob.txt
359
  ```
360
 
361
- Note the use of `--color` to distinguish between user input and generated text. Other parameters are explained in more detail in the [README](examples/main/README.md) for the `llama-cli` example program.
362
-
363
- ![image](https://user-images.githubusercontent.com/1991296/224575029-2af3c7dc-5a65-4f64-a6bb-517a532aea38.png)
364
-
365
- ### Persistent Interaction
366
-
367
- The prompt, user inputs, and model generations can be saved and resumed across calls to `./llama-cli` by leveraging `--prompt-cache` and `--prompt-cache-all`. The `./examples/chat-persistent.sh` script demonstrates this with support for long-running, resumable chat sessions. To use this example, you must provide a file to cache the initial chat prompt and a directory to save the chat session, and may optionally provide the same variables as `chat-13B.sh`. The same prompt cache can be reused for new chat sessions. Note that both prompt cache and chat directory are tied to the initial prompt (`PROMPT_TEMPLATE`) and the model file.
368
-
369
- ```bash
370
- # Start a new chat
371
- PROMPT_CACHE_FILE=chat.prompt.bin CHAT_SAVE_DIR=./chat/default ./examples/chat-persistent.sh
372
-
373
- # Resume that chat
374
- PROMPT_CACHE_FILE=chat.prompt.bin CHAT_SAVE_DIR=./chat/default ./examples/chat-persistent.sh
375
-
376
- # Start a different chat with the same prompt/model
377
- PROMPT_CACHE_FILE=chat.prompt.bin CHAT_SAVE_DIR=./chat/another ./examples/chat-persistent.sh
378
-
379
- # Different prompt cache for different prompt/model
380
- PROMPT_TEMPLATE=./prompts/chat-with-bob.txt PROMPT_CACHE_FILE=bob.prompt.bin \
381
- CHAT_SAVE_DIR=./chat/bob ./examples/chat-persistent.sh
382
  ```
383
 
384
- ### Constrained output with grammars
385
-
386
- `llama.cpp` supports grammars to constrain model output. For example, you can force the model to output JSON only:
387
-
388
- ```bash
389
- ./llama-cli -m ./models/13B/ggml-model-q4_0.gguf -n 256 --grammar-file grammars/json.gbnf -p 'Request: schedule a call at 8pm; Command:'
390
- ```
391
-
392
- The `grammars/` folder contains a handful of sample grammars. To write your own, check out the [GBNF Guide](./grammars/README.md).
393
-
394
- For authoring more complex JSON grammars, you can also check out https://grammar.intrinsiclabs.ai/, a browser app that lets you write TypeScript interfaces which it compiles to GBNF grammars that you can save for local use. Note that the app is built and maintained by members of the community, please file any issues or FRs on [its repo](http://github.com/intrinsiclabsai/gbnfgen) and not this one.
395
-
396
- ## Build
397
-
398
- Please refer to [Build llama.cpp locally](./docs/build.md)
399
-
400
- ## Supported backends
401
-
402
- | Backend | Target devices |
403
- | --- | --- |
404
- | [Metal](./docs/build.md#metal-build) | Apple Silicon |
405
- | [BLAS](./docs/build.md#blas-build) | All |
406
- | [BLIS](./docs/backend/BLIS.md) | All |
407
- | [SYCL](./docs/backend/SYCL.md) | Intel and Nvidia GPU |
408
- | [CUDA](./docs/build.md#cuda) | Nvidia GPU |
409
- | [hipBLAS](./docs/build.md#hipblas) | AMD GPU |
410
- | [Vulkan](./docs/build.md#vulkan) | GPU |
411
-
412
- ## Tools
413
-
414
- ### Prepare and Quantize
415
-
416
- > [!NOTE]
417
- > You can use the [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space on Hugging Face to quantise your model weights without any setup too. It is synced from `llama.cpp` main every 6 hours.
418
-
419
- To obtain the official LLaMA 2 weights please see the <a href="#obtaining-and-using-the-facebook-llama-2-model">Obtaining and using the Facebook LLaMA 2 model</a> section. There is also a large selection of pre-quantized `gguf` models available on Hugging Face.
420
-
421
- Note: `convert.py` has been moved to `examples/convert_legacy_llama.py` and shouldn't be used for anything other than `Llama/Llama2/Mistral` models and their derivatives.
422
- It does not support LLaMA 3, you can use `convert_hf_to_gguf.py` with LLaMA 3 downloaded from Hugging Face.
423
-
424
- To learn more about quantizing models, [read this documentation](./examples/quantize/README.md)
425
-
426
- ### Perplexity (measuring model quality)
427
-
428
- You can use the `perplexity` example to measure perplexity over a given prompt (lower perplexity is better).
429
- For more information, see [https://huggingface.co/docs/transformers/perplexity](https://huggingface.co/docs/transformers/perplexity).
430
-
431
- To learn more about how to measure perplexity using llama.cpp, [read this documentation](./examples/perplexity/README.md)
432
-
433
- ## Contributing
434
-
435
- - Contributors can open PRs
436
- - Collaborators can push to branches in the `llama.cpp` repo and merge PRs into the `master` branch
437
- - Collaborators will be invited based on contributions
438
- - Any help with managing issues and PRs is very appreciated!
439
- - See [good first issues](https://github.com/ggerganov/llama.cpp/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) for tasks suitable for first contributions
440
- - Read the [CONTRIBUTING.md](CONTRIBUTING.md) for more information
441
- - Make sure to read this: [Inference at the edge](https://github.com/ggerganov/llama.cpp/discussions/205)
442
- - A bit of backstory for those who are interested: [Changelog podcast](https://changelog.com/podcast/532)
443
-
444
- ## Other documentation
445
 
446
- - [main (cli)](./examples/main/README.md)
447
- - [server](./examples/server/README.md)
448
- - [jeopardy](./examples/jeopardy/README.md)
449
- - [GBNF grammars](./grammars/README.md)
450
 
451
- **Development documentation**
452
 
453
- - [How to build](./docs/build.md)
454
- - [Running on Docker](./docs/docker.md)
455
- - [Build on Android](./docs/android.md)
456
- - [Performance troubleshooting](./docs/development/token_generation_performance_tips.md)
457
- - [GGML tips & tricks](https://github.com/ggerganov/llama.cpp/wiki/GGML-Tips-&-Tricks)
458
 
459
- **Seminal papers and background on the models**
460
 
461
- If your issue is with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:
462
- - LLaMA:
463
- - [Introducing LLaMA: A foundational, 65-billion-parameter large language model](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)
464
- - [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
465
- - GPT-3
466
- - [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
467
- - GPT-3.5 / InstructGPT / ChatGPT:
468
- - [Aligning language models to follow instructions](https://openai.com/research/instruction-following)
469
- - [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155)
 
 
1
 
2
+ ---
3
 
4
+ language:
5
+ - en
6
+ license: llama3
7
+ pipeline_tag: text-generation
8
+ tags:
9
+ - facebook
10
+ - meta
11
+ - pytorch
12
+ - llama
13
+ - llama-3
14
+ - groq
15
+ - tool-use
16
+ - function-calling
17
 
18
+ ---
19
 
20
+ ![](https://lh7-us.googleusercontent.com/docsz/AD_4nXfrlKyH6elkxeyrKw4el9j8V3IOQLsqTVngg19Akt6se1Eq2xaocCEjOmc1w8mq5ENHeYfpzRWjYB8D4mtmMPsiH7QyX_Ii1kEM7bk8eMzO68y9JEuDcoJxJBgbNDzRbTdVXylN9_zjrEposDwsoN7csKiD?key=xt3VSDoCbmTY7o-cwwOFwQ)
21
 
22
+ # QuantFactory/Llama-3-Groq-8B-Tool-Use-GGUF
23
+ This is a quantized version of [Groq/Llama-3-Groq-8B-Tool-Use](https://huggingface.co/Groq/Llama-3-Groq-8B-Tool-Use), created using llama.cpp.
24
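A minimal sketch of fetching one of the GGUF files in this repo with `huggingface_hub` (the library named in the commit message above); the filename is a placeholder, so substitute one of the quantizations actually listed in the repo's files:

```python
from huggingface_hub import hf_hub_download

# Placeholder filename: substitute one of the GGUF quantizations published in this repo.
model_path = hf_hub_download(
    repo_id="QuantFactory/Llama-3-Groq-8B-Tool-Use-GGUF",
    filename="Llama-3-Groq-8B-Tool-Use.Q4_K_M.gguf",
)
print(model_path)  # local path to the downloaded GGUF file
```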
 
25
+ # Original Model Card
26
 
27
 
28
+ # Llama-3-Groq-8B-Tool-Use
29
 
30
+ This is the 8B parameter version of the Llama 3 Groq Tool Use model, specifically designed for advanced tool use and function calling tasks.
 
31
 
32
+ ## Model Details
33
 
34
+ - **Model Type:** Causal language model fine-tuned for tool use
35
+ - **Language(s):** English
36
+ - **License:** Meta Llama 3 Community License
37
+ - **Model Architecture:** Optimized transformer
38
+ - **Training Approach:** Full fine-tuning and Direct Preference Optimization (DPO) on Llama 3 8B base model
39
+ - **Input:** Text
40
+ - **Output:** Text, with enhanced capabilities for tool use and function calling
41
 
42
+ ## Performance
 
43
 
44
+ - **Berkeley Function Calling Leaderboard (BFCL) Score:** 89.06% overall accuracy
45
+ - This score represents the best performance among all open-source 8B LLMs on the BFCL
 
46
 
47
+ ## Usage and Limitations
 
 
48
 
49
+ This model is designed for research and development in tool use and function calling scenarios. It excels at tasks involving API interactions, structured data manipulation, and complex tool use. However, users should note:
50
 
51
+ - For general knowledge or open-ended tasks, a general-purpose language model may be more suitable
52
+ - The model may still produce inaccurate or biased content in some cases
53
+ - Users are responsible for implementing appropriate safety measures for their specific use case
54
 
55
56
 
57
+ Text prompt example:
58
59
  ```
60
+ <|start_header_id|>system<|end_header_id|>
61
+
62
+ You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
63
+ <tool_call>
64
+ {"name": <function-name>,"arguments": <args-dict>}
65
+ </tool_call>
66
+
67
+ Here are the available tools:
68
+ <tools> {
69
+ "name": "get_current_weather",
70
+ "description": "Get the current weather in a given location",
71
+ "parameters": {
72
+ "properties": {
73
+ "location": {
74
+ "description": "The city and state, e.g. San Francisco, CA",
75
+ "type": "string"
76
+ },
77
+ "unit": {
78
+ "enum": [
79
+ "celsius",
80
+ "fahrenheit"
81
+ ],
82
+ "type": "string"
83
+ }
84
+ },
85
+ "required": [
86
+ "location"
87
+ ],
88
+ "type": "object"
89
+ }
90
+ } </tools><|eot_id|><|start_header_id|>user<|end_header_id|>
91
+
92
+ What is the weather like in San Francisco?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
93
+
94
+ <tool_call>
95
+ {"id":"call_deok","name":"get_current_weather","arguments":{"location":"San Francisco","unit":"celsius"}}
96
+ </tool_call><|eot_id|><|start_header_id|>tool<|end_header_id|>
97
+
98
+ <tool_response>
99
+ {"id":"call_deok","result":{"temperature":"72","unit":"celsius"}}
100
+ </tool_response><|eot_id|><|start_header_id|>assistant<|end_header_id|>
101
102
  ```
103
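The `<tool_call>` payload is plain JSON, so the function name and arguments can be recovered from the raw model output with a little post-processing. A rough sketch using only the standard library; `raw_output` is a made-up string in the same format as the example above:

```python
import json
import re

# Made-up model output in the <tool_call> format shown above.
raw_output = (
    "<tool_call>\n"
    '{"name": "get_current_weather", "arguments": {"location": "San Francisco", "unit": "celsius"}}\n'
    "</tool_call>"
)

# Grab the JSON object between the <tool_call> tags and parse it.
match = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", raw_output, re.DOTALL)
if match:
    call = json.loads(match.group(1))
    print(call["name"], call["arguments"])
```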
 
104
+ ## Ethical Considerations
105
 
106
+ While fine-tuned for tool use, this model inherits the ethical considerations of the base Llama 3 model. Use responsibly and implement additional safeguards as needed for your application.
107
 
108
+ ## Availability
109
 
110
+ The model is available through:
111
+ - [Groq API console](https://console.groq.com) (OpenAI-compatible API; see the sketch below)
112
+ - [Hugging Face](https://huggingface.co/Groq/Llama-3-Groq-8B-Tool-Use)
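For the hosted route, Groq's endpoint is OpenAI-compatible, so a sketch along these lines should work; the model id below is an assumption based on Groq's naming at the time and should be verified in the console:

```python
import os

from openai import OpenAI

# Groq's OpenAI-compatible endpoint; set GROQ_API_KEY from the Groq console first.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],
)

response = client.chat.completions.create(
    model="llama3-groq-8b-8192-tool-use-preview",  # assumed id -- verify in the Groq console
    messages=[{"role": "user", "content": "What is the weather like in San Francisco?"}],
    temperature=0.5,
    top_p=0.65,
)
print(response.choices[0].message.content)
```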
 
 
113
 
114
+ For full details on responsible use, ethical considerations, and latest benchmarks, please refer to the [official Llama 3 documentation](https://llama.meta.com/) and the Groq model card.
115