---
tags:
  - LLaMA
  - GGML
---

# LLAMA-GGML-v2

These are 4-bit quantised LLaMA models in the latest GGML format, v2.

This repo is the result of quantising these models to 4-bit GGML for CPU inference using llama.cpp.
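
For reference, a rough sketch of how files like these are produced with llama.cpp's convert and quantize tools; all paths below are placeholders, not the actual files in this repo:

```bash
# Convert the original LLaMA weights to an f16 GGML file, then quantise
# it to 4-bit with the q4_0 method. Paths are placeholders.
python3 convert.py /path/to/llama/7B/
./quantize /path/to/llama/7B/ggml-model-f16.bin \
           /path/to/llama/7B/ggml-model-q4_0.bin q4_0
```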

THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!

llama.cpp recently made a breaking change to its quantisation methods.

I have quantised the GGML files in this repo with the latest version. You will therefore need llama.cpp compiled on May 12th or later (commit b9fd7ee or later) to use them.
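
If you are compiling llama.cpp yourself, a minimal sketch of a compatible build-and-run workflow might look like this; the model filename is a placeholder for whichever file you download from this repo:

```bash
# Build llama.cpp at (or after) the May 12th commit, then run a GGML v2 file.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b9fd7ee   # the May 12th 2023 quantisation change; any later commit works too
make

# The model filename is a placeholder -- substitute the file you downloaded.
./main -m /path/to/llama-ggml-v2-q4_0.bin \
       -p "Building a website can be done in 10 simple steps:" -n 128
```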

## How to run in text-generation-webui

GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the GGML model file in the models folder as usual.

Further instructions here: text-generation-webui/docs/llama.cpp-models.md.
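
As a rough sketch of that workflow, assuming the llama.cpp module is provided by the llama-cpp-python package (paths, the model filename, and the exact server flags are placeholders; see the linked docs for the authoritative steps):

```bash
# Install the llama.cpp bindings, copy the GGML file into the webui's
# models folder, and start the server. Filenames are placeholders.
pip install llama-cpp-python
cp /path/to/llama-ggml-v2-q4_0.bin text-generation-webui/models/
cd text-generation-webui
python server.py --model llama-ggml-v2-q4_0.bin
```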

Note: at this time text-generation-webui may not support the new May 12th llama.cpp quantisation methods.