---
tags:
- LLaMA
- GGML
---

# LLAMA-GGML-v2

These are 4-bit quantised LLaMA models in the latest GGML format (v2).

This repo is the result of quantising LLaMA to 4-bit GGML for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
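For reference, here is a minimal CPU-inference sketch using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings, assuming a version built against llama.cpp `b9fd7ee` or later; the model filename below is a placeholder for whichever GGML file you download from this repo:

```python
from llama_cpp import Llama

# Placeholder filename: substitute the actual GGML v2 file from this repo.
llm = Llama(
    model_path="./llama-7b.ggmlv2.q4_0.bin",
    n_ctx=512,     # context window
    n_threads=8,   # CPU threads
)

output = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:", "\n"],
    echo=True,
)
print(output["choices"][0]["text"])
```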

## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!

llama.cpp recently made a breaking change to its quantisation methods.

I have quantised the GGML files in this repo with the latest version, so you will need llama.cpp compiled on May 12th 2023 or later (commit `b9fd7ee` or later) to use them.
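If you are unsure whether a downloaded file is in the new format, you can inspect the header directly. The sketch below assumes the GGJT layout as I understand it (a little-endian uint32 magic `0x67676A74`, i.e. `ggjt`, followed by a uint32 format version, where the May 12th change corresponds to version 2); the filename is a placeholder:

```python
import struct

# Placeholder filename: point this at your downloaded GGML file.
with open("llama-7b.ggmlv2.q4_0.bin", "rb") as f:
    # First 8 bytes: little-endian uint32 magic, then uint32 version.
    magic, version = struct.unpack("<II", f.read(8))

if magic != 0x67676A74:  # reads back as "ggjt" when interpreted as a LE uint32
    print(f"Unexpected magic 0x{magic:08x} - not a GGJT file?")
elif version >= 2:
    print(f"GGJT v{version}: requires llama.cpp from May 12th 2023 (b9fd7ee) or later")
else:
    print(f"GGJT v{version}: pre-May-12th format; use an older llama.cpp or requantise")
```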

## How to run in `text-generation-webui`

GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

Note: at this time text-generation-webui may not support the new May 12th llama.cpp quantisation methods.
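Until that support lands, these files can be run directly against llama.cpp itself. Here is a rough sketch driving llama.cpp's `main` example binary from Python (the paths and model filename are placeholders for your own build and download locations):

```python
import subprocess

# Placeholder paths: adjust for your llama.cpp build and model location.
subprocess.run(
    [
        "./main",                                 # llama.cpp example binary
        "-m", "models/llama-7b.ggmlv2.q4_0.bin",  # GGML v2 model file
        "-t", "8",                                # CPU threads
        "-n", "128",                              # tokens to generate
        "-p", "Building a website can be done in 10 simple steps:",
    ],
    check=True,
)
```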