---
tags:
  - LLaMA
  - GGML
---

# LLaMA-GGML-v2

This is the repo for LLaMA models quantised to 4-bit in the latest llama.cpp GGML v2 format.

THESE FILES REQUIRE THE LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!

llama.cpp recently made a breaking change to its quantisation methods.

I have quantised the GGML files in this repo with the latest version.

You will therefore need llama.cpp built on May 12th 2023 or later (commit b9fd7ee or newer) to use them.
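As a rough sketch, building a compatible llama.cpp and running one of these models looks like the following. The model filename and path are placeholders - substitute whichever `.bin` file you downloaded from this repo, and checking out a newer commit (or latest master) is also fine:

```shell
# Clone and build llama.cpp at (or after) the required commit.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b9fd7ee   # minimum commit for GGML v2 files; later commits work too
make

# Run inference. The model path and prompt below are placeholders.
./main -m /path/to/llama-7b-q4_0-ggml-v2.bin \
       -p "Building a website can be done in 10 simple steps:" \
       -n 128
```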

## How to run in text-generation-webui

GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.

Further instructions here: text-generation-webui/docs/llama.cpp-models.md.
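For illustration, placing a downloaded file and launching the webui might look like this sketch; the model filename, folder name, and the assumption that you launch with `server.py --model` are placeholders based on typical text-generation-webui usage, so adapt them to your install:

```shell
# Put the GGML file where text-generation-webui looks for models.
# "llama-7b-ggml" and the .bin filename are placeholder names.
mkdir -p text-generation-webui/models/llama-7b-ggml
cp llama-7b-q4_0-ggml-v2.bin text-generation-webui/models/llama-7b-ggml/

# Launch the webui, selecting that model folder.
cd text-generation-webui
python server.py --model llama-7b-ggml
```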

Note: at this time text-generation-webui may not support the new May 12th llama.cpp quantisation methods.