---
language:
- en
license: llama2
tags:
- meta
- llama-2
- wasmedge
- second-state
- llama.cpp
model_name: Llama 2 GGUF
inference: false
model_creator: Meta Llama 2
model_type: llama
pipeline_tag: text-generation
prompt_template: |
  [INST] <<SYS>>
  You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
  <</SYS>>

  {prompt}[/INST]
quantized_by: wasmedge
---

This repo contains GGUF model files for cross-platform AI inference using the [WasmEdge Runtime](https://github.com/WasmEdge/WasmEdge). [Learn more](https://medium.com/stackademic/fast-and-portable-llama2-inference-on-the-heterogeneous-edge-a62508e82359) about why and how.

## Prerequisites

Install WasmEdge with the GGML plugin.

```
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- --plugin wasi_nn-ggml
```

Download the cross-platform Wasm apps for inference.

```
curl -LO https://github.com/second-state/llama-utils/raw/main/simple/llama-simple.wasm
curl -LO https://github.com/second-state/llama-utils/raw/main/chat/llama-chat.wasm
```
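You also need a model file from this repo. For example, download the `q5_k_m` version of the 7b chat model. The URL below is illustrative; copy the actual link for the file you want from this repo's file listing.

```
curl -LO https://huggingface.co/second-state/Llama-2-GGUF/resolve/main/llama-2-7b-chat-q5_k_m.gguf
```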
## Use the quantized models

The `q5_k_m` files are quantized versions of the llama2 models. They are about half the size of the original models, and hence consume about half as much VRAM, while still giving high-quality inference results.

Chat with the 7b chat model

```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-7b-chat-q5_k_m.gguf llama-chat.wasm
```

Generate text with the 7b base model

```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-7b-q5_k_m.gguf llama-simple.wasm "Robert Oppenheimer's most important achievement is "
```

Chat with the 13b chat model

```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-13b-chat-q5_k_m.gguf llama-chat.wasm
```

Generate text with the 13b base model

```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-13b-q5_k_m.gguf llama-simple.wasm "Robert Oppenheimer's most important achievement is "
```

## Use the f16 models

The f16 files are the GGUF equivalents of the original llama2 models. They give the best inference quality, but also consume the most resources, in both VRAM and compute time. The f16 models are also a good basis for fine-tuning.

Chat with the 7b chat model

```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-7b-chat-f16.gguf llama-chat.wasm
```

Generate text with the 7b base model

```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-7b-f16.gguf llama-simple.wasm "Robert Oppenheimer's most important achievement is "
```

Chat with the 13b chat model

```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-13b-chat-f16.gguf llama-chat.wasm
```

Generate text with the 13b base model

```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-13b-f16.gguf llama-simple.wasm "Robert Oppenheimer's most important achievement is "
```

## Resource-constrained models

The `q2_k` files are the smallest quantized versions of the llama2 models. They can run on devices with only 4GB of RAM, but the inference quality is noticeably lower.

Chat with the 7b chat model

```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-7b-chat-q2_k.gguf llama-chat.wasm
```

Generate text with the 7b base model

```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-7b-q2_k.gguf llama-simple.wasm "Robert Oppenheimer's most important achievement is "
```

Chat with the 13b chat model

```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-13b-chat-q2_k.gguf llama-chat.wasm
```

Generate text with the 13b base model

```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-13b-q2_k.gguf llama-simple.wasm "Robert Oppenheimer's most important achievement is "
```
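Under the hood, `llama-chat.wasm` wraps each user message in the Llama 2 chat template shown in the front matter above. You can reproduce a single chat turn by hand with `llama-simple.wasm`; the sketch below shortens the system prompt and uses an illustrative question:

```
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-7b-chat-q5_k_m.gguf llama-simple.wasm \
'[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe.
<</SYS>>

What is the WasmEdge Runtime? [/INST]'
```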
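The f16 files can also serve as input for llama.cpp's quantization tool if you need a quantization level not published here. A minimal sketch, assuming a local llama.cpp build (the binary is named `quantize` in older builds and `llama-quantize` in newer ones):

```
# Quantize an f16 GGUF file down to q5_k_m, roughly halving its size.
./quantize llama-2-7b-f16.gguf llama-2-7b-q5_k_m.gguf Q5_K_M
```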