TheBloke committed on
Commit 54e0c6d
Parent: 52730b4

Update README.md

Files changed (1): README.md (+13 -25)
README.md CHANGED
@@ -18,41 +18,27 @@ It was created by merging the LoRA provided in the above repo with the original
 
 The files in this repo were then quantized to 4bit and 5bit for use with [llama.cpp](https://github.com/ggerganov/llama.cpp).
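
For context, quantisation is done with llama.cpp's `quantize` tool. Below is a minimal sketch of how such files are produced, not the exact commands used for this repo: the f16 source filename is hypothetical, and it assumes a build recent enough to accept quant type names.

```
# Convert a (hypothetical) f16 GGML file into 4bit and 5bit variants.
# Usage: ./quantize <input.bin> <output.bin> <type>
./quantize gpt4-alpaca-lora-30B.ggml.f16.bin gpt4-alpaca-lora-30B.ggml.q4_0.bin q4_0
./quantize gpt4-alpaca-lora-30B.ggml.f16.bin gpt4-alpaca-lora-30B.ggml.q5_0.bin q5_0
./quantize gpt4-alpaca-lora-30B.ggml.f16.bin gpt4-alpaca-lora-30B.ggml.q5_1.bin q5_1
```
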
- ## Provided files
- | Name | Quant method | Bits | Size | RAM required | Use case |
- | ---- | ---- | ---- | ---- | ---- | ----- |
- | `gpt4-alpaca-lora-30B.GGML.q4_0.bin` | q4_0 | 4bit | 20.3GB | 23GB | Maximum compatibility |
- | `gpt4-alpaca-lora-30B.GGML.q4_2.bin` | q4_2 | 4bit | 20.3GB | 23GB | Best compromise between resources, speed and quality |
- | `gpt4-alpaca-lora-30B.GGML.q5_0.bin` | q5_0 | 5bit | 22.4GB | 25GB | Brand-new 5bit method. Potentially higher quality than 4bit, at the cost of slightly higher resource usage. |
- | `gpt4-alpaca-lora-30B.GGML.q5_1.bin` | q5_1 | 5bit | 24.4GB | 27GB | Brand-new 5bit method. Slightly higher resource usage than q5_0. |
-
- * The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp.
- * The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues; see below.
- * The q5_0 file uses the brand-new 5bit method released on 26th April. It is the 5bit equivalent of q4_0.
- * The q5_1 file uses the brand-new 5bit method released on 26th April. It is the 5bit equivalent of q4_1.
-
- ## q4_2 compatibility
-
- q4_2 is a relatively new 4bit quantisation method offering improved quality. However, it is still under development and its format is subject to change.
-
- To use these files you will need recent llama.cpp code, and future updates to llama.cpp may require them to be re-generated.
-
- If and when the q4_2 file no longer works with recent versions of llama.cpp, I will endeavour to update it.
-
- If you want guaranteed compatibility with a wide range of llama.cpp versions, use the q4_0 file.
-
- ## q5_0 and q5_1 compatibility
-
- These new methods were released to llama.cpp on 26th April. You will need to pull the latest llama.cpp code and rebuild to be able to use them.
-
- Third-party UIs/tools may not support this yet.
+ ## REQUIRES LATEST LLAMA.CPP (May 12th 2023 - commit `b9fd7ee`)!
+
+ llama.cpp recently made a breaking change to its quantisation methods.
+
+ I have re-quantised the GGML files in this repo. You will therefore need llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.
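
To get a new enough build, here is a minimal sketch, assuming a Unix-like system with `git` and `make` installed; the commit hash is the one quoted above:

```
# Clone llama.cpp (or run `git pull` in an existing checkout) and rebuild
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Optional sanity check: is the breaking-change commit in your history?
git merge-base --is-ancestor b9fd7ee HEAD && echo "OK: new quantisation format supported"
```
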
 
+ The previous files, which will still work in older versions of llama.cpp, can be found in branch `previous_llama`.
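
If you need those old-format files, one way to fetch them is to clone that branch directly. A sketch, assuming `git-lfs` is installed; substitute this repo's actual URL for the placeholder:

```
# The .bin files are stored in Git LFS, so install the LFS hooks first
git lfs install
# Clone only the previous_llama branch
git clone --branch previous_llama --single-branch <URL-of-this-repo>
```
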
+ ## Provided files
+ | Name | Quant method | Bits | Size | RAM required | Use case |
+ | ---- | ---- | ---- | ---- | ---- | ----- |
+ | `gpt4-alpaca-lora-30B.ggml.q4_0.bin` | q4_0 | 4bit | 20.3GB | 23GB | 4bit. |
+ | `gpt4-alpaca-lora-30B.ggml.q5_0.bin` | q5_0 | 5bit | 22.4GB | 25GB | 5bit. Higher accuracy, higher resource usage, slower inference. |
+ | `gpt4-alpaca-lora-30B.ggml.q5_1.bin` | q5_1 | 5bit | 24.4GB | 27GB | 5bit. Even higher accuracy and resource usage, and slower inference. |
 
 ## How to run in `llama.cpp`
 
 I use the following command line; adjust for your tastes and needs:
 
 ```
- ./main -t 18 -m gpt4-alpaca-lora-30B.GGML.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
+ ./main -t 18 -m gpt4-alpaca-lora-30B.ggml.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
 ### Instruction:
 Write a story about llamas
 ### Response:"
 ```
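
As a usage note, llama.cpp can also run interactively, so you can keep issuing instructions in the same Alpaca format. A sketch of such a variant, assuming your build supports the standard `-i` (interactive) and `-r` (reverse prompt) flags:

```
# Hand control back to you each time the model emits "### Instruction:"
./main -t 18 -m gpt4-alpaca-lora-30B.ggml.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 \
  -i -r "### Instruction:" -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a story about llamas
### Response:"
```
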
@@ -67,6 +53,8 @@ Create a model directory that has `ggml` (case sensitive) in its name. Then put
 
 Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
 
+ Note that as of May 12th, text-generation-webui likely won't support the newly updated GGML models until it has itself been updated.
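
To illustrate the directory layout mentioned in the hunk header above, a minimal sketch; the directory name is only an example (it just needs to contain `ggml`), and the `models/` path assumes text-generation-webui's default layout:

```
# The model directory name must contain "ggml" (case sensitive)
mkdir -p text-generation-webui/models/gpt4-alpaca-lora-30B-ggml
mv gpt4-alpaca-lora-30B.ggml.q4_0.bin text-generation-webui/models/gpt4-alpaca-lora-30B-ggml/
```
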
 # Original GPT4 Alpaca Lora model card
 
 This repository comes with a LoRA checkpoint to make LLaMA into a chatbot-like language model. The checkpoint is the output of an instruction-following fine-tuning process with the following settings on an 8xA100 (40G) DGX system.