TheBloke committed
Commit 2b6487b
1 Parent(s): 16beee9

New GGMLv3 format for breaking llama.cpp change May 19th commit 2d5db48

Files changed (1)
  1. README.md +12 -7
README.md CHANGED
@@ -17,21 +17,26 @@ This repo contains 4bit and 5bit quantised GGML files for CPU inference using [l
  * [4bit and 5bit GGML models for CPU inference in llama.cpp](https://huggingface.co/TheBloke/gpt4-alpaca-lora_mlp-65B-GGML)
  * [float16 unquantised model for GPU inference and further conversions](https://huggingface.co/TheBloke/gpt4-alpaca-lora_mlp-65B-HF)
 
- ## REQUIRES LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!
+ ## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
 
- llama.cpp recently made a breaking change to its quantisation methods.
+ llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508
 
- These GGML files were quantised with the latest llama.cpp code. Therefore you will require llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.
+ I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.
+
+ For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
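
If you are unsure which container format a given file uses, its header tells you. Below is a minimal Python sketch, assuming the `ggjt` magic plus little-endian uint32 version layout that llama.cpp's loader used at the time; files from this commit's main branch should report version 3:

```python
import struct

def ggml_container_version(path):
    """Best-effort check of a llama.cpp model file's container header."""
    with open(path, "rb") as f:
        magic = struct.unpack("<I", f.read(4))[0]
        if magic == 0x67676A74:  # assumed 'ggjt' magic for versioned GGML files
            version = struct.unpack("<I", f.read(4))[0]
            return f"ggjt v{version}"  # v3 corresponds to the May 19th format
        return f"other/older container (magic 0x{magic:08x})"

print(ggml_container_version("gpt4-alpaca-lora_mlp-65B.ggmlv3.q4_0.bin"))
```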
 
  ## Provided files
  | Name | Quant method | Bits | Size | RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
- | `gpt4-alpaca-lora_mlp-65B.ggml.q4_0.bin` | q4_0 | 4bit | 40.8GB | 43GB | 4-bit. |
- | `gpt4-alpaca-lora_mlp-65B.ggml.q5_0.bin` | q5_0 | 5bit | 44.9GB | 47GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
- | `gpt4-alpaca-lora_mlp-65B.ggml.q5_1.bin` | q5_1 | 5bit | 49.0GB | 51GB | 5-bit. Even higher accuracy, higher resource usage and slower inference. |
+ | `gpt4-alpaca-lora_mlp-65B.ggmlv3.q4_0.bin` | q4_0 | 4bit | 40.8GB | 43GB | 4-bit. |
+ | `gpt4-alpaca-lora_mlp-65B.ggmlv3.q4_1.bin` | q4_1 | 4bit | 44.9GB | 47GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
+ | `gpt4-alpaca-lora_mlp-65B.ggmlv3.q5_0.bin` | q5_0 | 5bit | 44.9GB | 47GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
+ | `gpt4-alpaca-lora_mlp-65B.ggmlv3.q5_1.bin` | q5_1 | 5bit | 49.0GB | 51GB | 5-bit. Even higher accuracy, higher resource usage and slower inference. |
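
To fetch a single file from the table above rather than cloning the whole repo, the `huggingface_hub` Python library works; a sketch (the branch filename follows the old `.ggml.` naming from the previous README, so check the branch's file list):

```python
from huggingface_hub import hf_hub_download

# Single GGMLv3 file from the main branch.
new_path = hf_hub_download(
    repo_id="TheBloke/gpt4-alpaca-lora_mlp-65B-GGML",
    filename="gpt4-alpaca-lora_mlp-65B.ggmlv3.q5_0.bin",
)

# Old-format file from the compatibility branch.
old_path = hf_hub_download(
    repo_id="TheBloke/gpt4-alpaca-lora_mlp-65B-GGML",
    filename="gpt4-alpaca-lora_mlp-65B.ggml.q5_0.bin",
    revision="previous_llama_ggmlv2",
)
print(new_path, old_path)
```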
 
  Note: no q8_0 will be provided as HF won't allow uploading of files larger than 50GB :)
 
+ I am investigating other methods, e.g. a split ZIP file, and will try to upload the q8_0 that way soon.
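
For illustration, here is one way a split upload could work: a raw byte split rather than a ZIP (a hypothetical sketch of the general idea, not necessarily the method that will be used), with each part kept under HF's 50GB limit and simple concatenation to reassemble:

```python
import glob
import shutil

PART_SIZE = 40 * 1024**3   # 40GB parts stay safely under HF's 50GB upload limit
BUF = 64 * 1024**2         # 64MB copy buffer so no part is ever held fully in RAM

def split_file(src: str) -> None:
    """Write src as src.00, src.01, ... each at most PART_SIZE bytes."""
    with open(src, "rb") as f:
        part = 0
        while True:
            remaining = PART_SIZE
            data = f.read(min(BUF, remaining))
            if not data:
                break  # EOF reached exactly on a part boundary
            with open(f"{src}.{part:02d}", "wb") as out:
                while data:
                    out.write(data)
                    remaining -= len(data)
                    if remaining <= 0:
                        break
                    data = f.read(min(BUF, remaining))
            part += 1

def join_files(dest: str, pattern: str) -> None:
    """Concatenate the sorted parts (e.g. 'model.bin.*') back into one file."""
    with open(dest, "wb") as out:
        for name in sorted(glob.glob(pattern)):
            with open(name, "rb") as f:
                shutil.copyfileobj(f, out, BUF)
```

The zero-padded suffix keeps the parts in lexicographic order, so reassembly only needs a glob and `cat`-style concatenation.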
+
  # Original model card
 
  This repo provides the training checkpoint of LLaMA on the alpaca_data_gpt4 dataset via LoRA [MLP] on 8xA100(80G).
 
@@ -155,4 +160,4 @@ for i in range(2,10):
 
  ---
 
- > [1] Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig: Towards a Unified View of Parameter-Efficient Transfer Learning. ICLR 2022
+ > [1] Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig: Towards a Unified View of Parameter-Efficient Transfer Learning. ICLR 2022