TheBloke committed on
Commit 3b0c1c0
1 Parent(s): 50e9372

Initial GGML model commit

Files changed (1)
  1. README.md +10 -8
README.md CHANGED
@@ -29,9 +29,11 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 * [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server.
 * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server.

+These files were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).
+
 ## Repositories available

-* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/open-llama-7B-v2-open-instruct-GPTQ)
+* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/open-llama-7B-v2-open-instruct-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/open-llama-7B-v2-open-instruct-GGML)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/VMware/open-llama-7b-v2-open-instruct)

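A minimal sketch of fetching one of the GGML files from the GGML repository linked in the hunk above, assuming the `huggingface_hub` client is installed; the repo id comes from that link and the file name from the Provided Files table further down:

```python
# Sketch: download one GGML quantisation from the repository linked above.
# Assumes `pip install huggingface_hub`; the file name is taken from the Provided Files table.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="TheBloke/open-llama-7B-v2-open-instruct-GGML",
    filename="open-llama-7b-v2-open-instruct.ggmlv3.q4_K_M.bin",
)
print(local_path)  # path to the locally cached .bin file
```
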
@@ -43,6 +45,7 @@ Below is an instruction that describes a task. Write a response that appropriate
 ### Instruction: {prompt}

 ### Response:
+
 ```

 <!-- compatibility_ggml start -->
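The template in the hunk above takes a single `{prompt}` placeholder. As a rough illustration, a minimal Python sketch of filling it in; the preamble wording is assumed from the hunk header and may differ slightly from the full README:

```python
# Sketch: fill the Alpaca-style template shown in the hunk above.
# The preamble wording is an assumption; check the full README for the exact text.
def build_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction: {instruction}\n\n"
        "### Response:"
    )

print(build_prompt("Tell me about llamas."))
```
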
@@ -78,21 +81,20 @@ Refer to the Provided Files table below to see what files use which methods, and
 | Name | Quant method | Bits | Size | Max RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
 | open-llama-7b-v2-open-instruct.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB| 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
-| open-llama-7b-v2-open-instruct.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB| 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
-| open-llama-7b-v2-open-instruct.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB| 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
 | open-llama-7b-v2-open-instruct.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB| 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
-| open-llama-7b-v2-open-instruct.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB| 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
-| open-llama-7b-v2-open-instruct.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB| 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
+| open-llama-7b-v2-open-instruct.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB| 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+| open-llama-7b-v2-open-instruct.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB| 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
 | open-llama-7b-v2-open-instruct.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB| 6.29 GB | Original quant method, 4-bit. |
 | open-llama-7b-v2-open-instruct.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB| 6.71 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
-| open-llama-7b-v2-open-instruct.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB| 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
-| open-llama-7b-v2-open-instruct.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB| 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
+| open-llama-7b-v2-open-instruct.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB| 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
+| open-llama-7b-v2-open-instruct.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB| 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
 | open-llama-7b-v2-open-instruct.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB| 7.13 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
 | open-llama-7b-v2-open-instruct.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB| 7.56 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
+| open-llama-7b-v2-open-instruct.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB| 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
+| open-llama-7b-v2-open-instruct.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB| 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
 | open-llama-7b-v2-open-instruct.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB| 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
 | open-llama-7b-v2-open-instruct.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB| 9.66 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

-
 **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

 ## How to run in `llama.cpp`
 
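The body of this section is not included in the hunk above. As a rough sketch of the same idea via the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) binding listed earlier (rather than the `llama.cpp` CLI itself), assuming a GGML-era build of that library and the q4_K_M file from the table:

```python
# Sketch: load one of the GGML files above with llama-cpp-python and offload layers to the GPU.
# Assumes a GGML-era (pre-GGUF) llama-cpp-python build; newer releases expect GGUF files instead.
from llama_cpp import Llama

llm = Llama(
    model_path="open-llama-7b-v2-open-instruct.ggmlv3.q4_K_M.bin",  # file name from the Provided Files table
    n_ctx=2048,       # context window
    n_gpu_layers=32,  # offloaded layers reduce RAM usage and use VRAM instead, as noted above
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction: Tell me about llamas.\n\n"
    "### Response:"
)

output = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```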