gotzmann committed 57ffeb1 (1 parent: 2242b78)

Create README.md

Files changed (1): README.md ADDED (+25, -0)
---
tags:
- LLaMA
- GGML
---

# LLAMA-GGML-v2

These are LLaMA models quantised to 4-bit in GGML format.

This repo is the result of quantising to 4-bit GGML for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
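The quantisation step can be sketched roughly as follows. This is an illustrative assumption, not the exact commands used for this repo: the paths, the 7B model size, and the `q4_0` quantisation type are placeholders.

```shell
# Sketch only: convert a LLaMA checkpoint to GGML and quantise it to 4-bit
# with llama.cpp. Paths and the 7B size are illustrative assumptions.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && make
python3 convert.py models/7B/        # emits a ggml-model-f16.bin alongside the weights
./quantize models/7B/ggml-model-f16.bin models/7B/ggml-model-q4_0.bin q4_0
```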

## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 12th 2023, commit b9fd7ee)!

llama.cpp recently made a breaking change to its quantisation methods.

I have quantised the GGML files in this repo with the latest version. You will therefore need llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.
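Pinning a compatible build might look like the following sketch; only the commit hash `b9fd7ee` comes from this README, the rest is the standard llama.cpp build procedure.

```shell
# Sketch: build llama.cpp at commit b9fd7ee (May 12th 2023) or later,
# as required by the re-quantised files in this repo.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b9fd7ee   # or simply stay on any later commit
make
```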

## How to run in `text-generation-webui`

GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the GGML model file in a model folder as usual.
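Assuming the webui's standard `models/` directory layout, placing the file looks like this (the filename below is an illustrative assumption, not the exact name in this repo):

```shell
# Sketch: drop the quantised GGML file into text-generation-webui's models folder.
mkdir -p text-generation-webui/models
cp ggml-model-q4_0.bin text-generation-webui/models/
```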

Further instructions are here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

Note: at this time text-generation-webui may not support the new May 12th llama.cpp quantisation methods.