Commit caf25ad by gotzmann (parent: e5f7430)

Update README.md

Files changed (1): README.md (+5, −5)
README.md CHANGED
@@ -6,15 +6,15 @@ tags:
 
 # LLAMA-GGML-v2
 
-This is GGML format quantised 4bit models of LLaMA models for the latest GGML format v2.
+This is repo for LLaMA models quantised down to 4bit for the latest [llama.cpp](https://github.com/ggerganov/llama.cpp) GGML v2 format.
 
-This repo is the result of quantising to 4bit GGML for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
-
-## THE FILES IN MAIN BRANCH REQUIRES LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!
+## THE FILES REQUIRES LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!
 
 llama.cpp recently made a breaking change to its quantisation methods.
 
-I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.
+I have quantised the GGML files in this repo with the latest version.
+
+Therefore you will require llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.
 
 ## How to run in `text-generation-webui`
 
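The breaking change the README warns about is a file-format version bump, so whether a given model file needs the newer llama.cpp can be sniffed from its header. Below is a minimal Python sketch under the assumption that these model files start with a `ggjt` magic word (read as the little-endian uint32 `0x67676a74`) followed by a uint32 version, with version 2 marking the post-May-12th quantisation format; the exact magic and version values are assumptions about the llama.cpp container, not taken from this commit:

```python
import struct

# Assumed header layout for llama.cpp "ggjt" model files:
# uint32 magic (little-endian), then uint32 format version.
GGJT_MAGIC = 0x67676A74  # "ggjt" interpreted as a little-endian uint32

def ggjt_version(header: bytes):
    """Return the version from a model file header, or None if the magic differs."""
    if len(header) < 8:
        return None
    magic, version = struct.unpack_from("<II", header, 0)
    return version if magic == GGJT_MAGIC else None

def needs_new_llama_cpp(header: bytes) -> bool:
    """True if the header looks like a GGML v2 file, i.e. it would need
    llama.cpp built from commit b9fd7ee (May 12th 2023) or later."""
    return ggjt_version(header) == 2

if __name__ == "__main__":
    v2_header = struct.pack("<II", GGJT_MAGIC, 2)
    v1_header = struct.pack("<II", GGJT_MAGIC, 1)
    print(needs_new_llama_cpp(v2_header))  # True
    print(needs_new_llama_cpp(v1_header))  # False
```

In practice you would pass in the first 8 bytes of the `.bin` file (`open(path, "rb").read(8)`); a `None` result simply means the file does not use this assumed container layout.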