Text Generation
Transformers
English
llama
TheBloke committed
Commit 7b35bc9
1 parent: 6018ae9

Initial GGML model commit

Files changed (1):
1. README.md +16 -16
README.md CHANGED
@@ -50,10 +50,10 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML)
 * [Open-Orca's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B)
 
-## Prompt template: TBC
+## Prompt template: OpenChat Llama2 V1
 
 ```
-Info on prompt template will be added shortly.
+User: {prompt}<|end_of_turn|>Assistant:
 ```
 
 <!-- compatibility_ggml start -->
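
The new template wraps a single user turn and cues the assistant's reply. For reference (not part of the commit), here is a minimal sketch of applying it with llama-cpp-python, which loaded GGML files in this era; the chosen file, path, and generation settings are illustrative assumptions:

```
# Hypothetical usage sketch, not from the commit: apply the "OpenChat Llama2 V1"
# template above with llama-cpp-python (a GGML-era release; later versions load
# GGUF files instead of .ggmlv3.bin).
from llama_cpp import Llama

# Any quantised file from the table below would work; q4_K_M is an illustrative pick.
llm = Llama(model_path="openorcaxopenchat-preview2-13b.ggmlv3.q4_K_M.bin")

def chat(prompt: str) -> str:
    # Wrap the user message exactly as the template specifies.
    formatted = f"User: {prompt}<|end_of_turn|>Assistant:"
    # Stop at the end-of-turn marker so the model does not begin a new turn.
    result = llm(formatted, max_tokens=256, stop=["<|end_of_turn|>"])
    return result["choices"][0]["text"].strip()

print(chat("What is the capital of France?"))
```
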
@@ -89,20 +89,20 @@ Refer to the Provided Files table below to see what files use which methods, and
 
 | Name | Quant method | Bits | Size | Max RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
-| [openorcaxopenchat-preview2-13b.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML/blob/main/openorcaxopenchat-preview2-13b.ggmlv3.q2_K.bin) | q2_K | 2 | 5.74 GB | 8.24 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
-| [openorcaxopenchat-preview2-13b.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML/blob/main/openorcaxopenchat-preview2-13b.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 7.14 GB | 9.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
-| [openorcaxopenchat-preview2-13b.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML/blob/main/openorcaxopenchat-preview2-13b.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 6.53 GB | 9.03 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
-| [openorcaxopenchat-preview2-13b.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML/blob/main/openorcaxopenchat-preview2-13b.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 5.87 GB | 8.37 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
-| [openorcaxopenchat-preview2-13b.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML/blob/main/openorcaxopenchat-preview2-13b.ggmlv3.q4_0.bin) | q4_0 | 4 | 7.32 GB | 9.82 GB | Original quant method, 4-bit. |
-| [openorcaxopenchat-preview2-13b.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML/blob/main/openorcaxopenchat-preview2-13b.ggmlv3.q4_1.bin) | q4_1 | 4 | 8.14 GB | 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. |
-| [openorcaxopenchat-preview2-13b.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML/blob/main/openorcaxopenchat-preview2-13b.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 8.06 GB | 10.56 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
-| [openorcaxopenchat-preview2-13b.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML/blob/main/openorcaxopenchat-preview2-13b.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 7.56 GB | 10.06 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
-| [openorcaxopenchat-preview2-13b.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML/blob/main/openorcaxopenchat-preview2-13b.ggmlv3.q5_0.bin) | q5_0 | 5 | 8.95 GB | 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
-| [openorcaxopenchat-preview2-13b.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML/blob/main/openorcaxopenchat-preview2-13b.ggmlv3.q5_1.bin) | q5_1 | 5 | 9.76 GB | 12.26 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
-| [openorcaxopenchat-preview2-13b.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML/blob/main/openorcaxopenchat-preview2-13b.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 9.40 GB | 11.90 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
-| [openorcaxopenchat-preview2-13b.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML/blob/main/openorcaxopenchat-preview2-13b.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 9.14 GB | 11.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
-| [openorcaxopenchat-preview2-13b.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML/blob/main/openorcaxopenchat-preview2-13b.ggmlv3.q6_K.bin) | q6_K | 6 | 10.83 GB | 13.33 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
-| [openorcaxopenchat-preview2-13b.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML/blob/main/openorcaxopenchat-preview2-13b.ggmlv3.q8_0.bin) | q8_0 | 8 | 13.83 GB | 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
+| openorcaxopenchat-preview2-13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.74 GB | 8.24 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
+| openorcaxopenchat-preview2-13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 7.14 GB | 9.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+| openorcaxopenchat-preview2-13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.53 GB | 9.03 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+| openorcaxopenchat-preview2-13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.87 GB | 8.37 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
+| openorcaxopenchat-preview2-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original quant method, 4-bit. |
+| openorcaxopenchat-preview2-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. |
+| openorcaxopenchat-preview2-13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 8.06 GB | 10.56 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
+| openorcaxopenchat-preview2-13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.56 GB | 10.06 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
+| openorcaxopenchat-preview2-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
+| openorcaxopenchat-preview2-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
+| openorcaxopenchat-preview2-13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.40 GB | 11.90 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
+| openorcaxopenchat-preview2-13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 9.14 GB | 11.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
+| openorcaxopenchat-preview2-13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.83 GB | 13.33 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
+| openorcaxopenchat-preview2-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
 
 **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
 
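
On the figures above: each Max RAM value is the file size plus roughly 2.5 GB of working overhead, and, per the note, offloading layers to the GPU trades that system RAM for VRAM. Here is a hedged sketch of offloading via llama-cpp-python's n_gpu_layers option (requires a build compiled with GPU support; the layer count and file choice are illustrative assumptions):

```
# Hypothetical offloading sketch, not from the commit: hold part of the model
# in VRAM so less system RAM is needed, per the note above.
from llama_cpp import Llama

llm = Llama(
    model_path="openorcaxopenchat-preview2-13b.ggmlv3.q5_K_M.bin",  # assumed path
    n_gpu_layers=32,  # layers kept in VRAM; the remainder stays in system RAM
    n_ctx=2048,       # context window to allocate
)
```

A 13B Llama model has 40 transformer layers, so n_gpu_layers=32 keeps most of the weights in VRAM, while n_gpu_layers=0 reproduces the CPU-only RAM figures in the table.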