---
datasets:
- LDJnr/Puffin
inference: false
language:
- eng
license: llama2
model_creator: NousResearch
model_link: https://huggingface.co/NousResearch/Redmond-Puffin-13B
model_name: Redmond Puffin 13B V1.3
model_type: llama
quantized_by: TheBloke
tags:
- llama-2
- sft
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Redmond Puffin 13B V1.3 - GGML
- Model creator: [NousResearch](https://huggingface.co/NousResearch)
- Original model: [Redmond Puffin 13B V1.3](https://huggingface.co/NousResearch/Redmond-Puffin-13B)

## Description

This repo contains GGML format model files for [NousResearch's Redmond Puffin 13B V1.3](https://huggingface.co/NousResearch/Redmond-Puffin-13B).

### Important note regarding GGML files

The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.

Please use the GGUF models instead.

### About GGML

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD) and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
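
As a quick illustration of the Python route, loading one of these files with ctransformers looks roughly like the sketch below. The chosen file name and `gpu_layers` value are illustrative only, and a GGML-era release of ctransformers is assumed:

```python
# Minimal sketch: load a GGML quantisation of Redmond Puffin 13B with ctransformers.
# Assumes a ctransformers release that still supports GGML files.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Redmond-Puffin-13B-GGML",
    model_file="redmond-puffin-13b.ggmlv3.q4_K_M.bin",  # any file from the table below
    model_type="llama",
    gpu_layers=32,  # set to 0 for CPU-only inference
)

prompt = "### human: Write a story about llamas\n\n### response:"
print(llm(prompt, max_new_tokens=256, temperature=0.7, repetition_penalty=1.1))
```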

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML)
* [NousResearch's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Redmond-Puffin-13B)

## Prompt template: Human-Response2

```
### human: {prompt}

### response:
```
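
If you are scripting against the model, applying this template is a one-liner; a tiny sketch (the helper name is just for illustration):

```python
def format_puffin_prompt(user_message: str) -> str:
    """Wrap a user message in the Human-Response template shown above."""
    return f"### human: {user_message}\n\n### response:"

print(format_puffin_prompt("Write a story about llamas"))
```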

<!-- compatibility_ggml start -->
## Compatibility

These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023.

For support with the latest llama.cpp, please use GGUF files instead.

The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)

As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.

## Explanation of the new k-quant methods
<details>

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [redmond-puffin-13b.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML/blob/main/redmond-puffin-13b.ggmlv3.q2_K.bin) | q2_K | 2 | 5.74 GB | 8.24 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [redmond-puffin-13b.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML/blob/main/redmond-puffin-13b.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 5.87 GB | 8.37 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [redmond-puffin-13b.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML/blob/main/redmond-puffin-13b.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 6.53 GB | 9.03 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [redmond-puffin-13b.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML/blob/main/redmond-puffin-13b.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 7.14 GB | 9.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [redmond-puffin-13b.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML/blob/main/redmond-puffin-13b.ggmlv3.q4_0.bin) | q4_0 | 4 | 7.32 GB | 9.82 GB | Original quant method, 4-bit. |
| [redmond-puffin-13b.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML/blob/main/redmond-puffin-13b.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 7.56 GB | 10.06 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [redmond-puffin-13b.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML/blob/main/redmond-puffin-13b.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 8.06 GB | 10.56 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [redmond-puffin-13b.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML/blob/main/redmond-puffin-13b.ggmlv3.q4_1.bin) | q4_1 | 4 | 8.14 GB | 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [redmond-puffin-13b.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML/blob/main/redmond-puffin-13b.ggmlv3.q5_0.bin) | q5_0 | 5 | 8.95 GB | 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [redmond-puffin-13b.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML/blob/main/redmond-puffin-13b.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 9.15 GB | 11.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [redmond-puffin-13b.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML/blob/main/redmond-puffin-13b.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 9.40 GB | 11.90 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [redmond-puffin-13b.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML/blob/main/redmond-puffin-13b.ggmlv3.q5_1.bin) | q5_1 | 5 | 9.76 GB | 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| [redmond-puffin-13b.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML/blob/main/redmond-puffin-13b.ggmlv3.q6_K.bin) | q6_K | 6 | 10.83 GB | 13.33 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| [redmond-puffin-13b.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML/blob/main/redmond-puffin-13b.ggmlv3.q8_0.bin) | q8_0 | 8 | 13.83 GB | 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
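
If you only want one of these files rather than the whole repository, the `huggingface_hub` Python library can fetch an individual quantisation; a small sketch (the chosen file name is just an example from the table above):

```python
# Sketch: download a single quantisation file instead of cloning the full repo.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="TheBloke/Redmond-Puffin-13B-GGML",
    filename="redmond-puffin-13b.ggmlv3.q4_K_M.bin",  # any file from the table above
)
print(local_path)
```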

## How to run in `llama.cpp`

Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.

For compatibility with the latest llama.cpp, please use GGUF files instead.

```
./main -t 10 -ngl 32 -m redmond-puffin-13b.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### human: Write a story about llamas\n\n### response:"
```

Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
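
If you would rather drive the model from Python, a roughly equivalent llama-cpp-python sketch follows; it assumes a GGML-era release of the library (one predating the switch to GGUF):

```python
# Minimal sketch: run a GGML quantisation with llama-cpp-python.
# Assumes a pre-GGUF release of llama-cpp-python that still loads GGML files.
from llama_cpp import Llama

llm = Llama(
    model_path="redmond-puffin-13b.ggmlv3.q4_K_M.bin",
    n_ctx=2048,       # sequence length, as with -c above
    n_gpu_layers=32,  # layers to offload to GPU; 0 for CPU-only
)

output = llm(
    "### human: Write a story about llamas\n\n### response:",
    max_tokens=256,
    temperature=0.7,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```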

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser

Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: NousResearch's Redmond Puffin 13B V1.3

**The first commercially available language model released by Nous Research!**

Redmond-Puffin-13B is likely the world's first Llama-2 based, fine-tuned language model, leveraging a hand-curated set of 3K high quality examples, many of which take full advantage of the 4096 context length of Llama 2. This model was fine-tuned by Nous Research, with LDJ leading the training and dataset curation, along with significant dataset formation contributions by J-Supha.

Special thank you to Redmond AI for sponsoring the compute.

## Model Training

Redmond-Puffin 13B-V1.3 is a new model trained for multiple epochs on a dataset of 3,000 carefully curated GPT-4 examples, most of which are long context conversations between a real human and GPT-4.

Additional data came from carefully curated subsections of datasets such as CamelAI's Physics, Chemistry, Biology and Math.

The recommended model usage is:

```
### human:

### response:
```

Optional recommended pre-prompt / system prompt:

```
### human: Interact in conversation to the best of your ability, please be concise, logical, intelligent and coherent.

### response: Sure! sounds good.
```

## When should I use Puffin or Hermes 2?

Puffin and Hermes-2 both beat the previous SOTA for the GPT4All benchmarks, with Hermes-2 winning by a 0.1% margin over Puffin.

- Hermes 2 is trained on purely single-turn instruction examples.

- Puffin is trained mostly on multi-turn, long context, highly curated and cleaned GPT-4 conversations with real humans, as well as curated single-turn examples relating to Physics, Bio, Math and Chem.

For these reasons, it's recommended to give Puffin a try if you want to have multi-turn conversations and/or long context communication.

## Example Outputs!

![puffin](https://i.imgur.com/P0MsN8B.png)

![puffin](https://i.imgur.com/8EO3ThV.png)

![puffin](https://i.imgur.com/5IWolFw.png)

![puffin](https://i.imgur.com/TQui8m7.png)

![puffin](https://i.imgur.com/tderIfl.png)

## Notable Features:

In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations.

If you have at least a bachelor's in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord!

## Benchmarks!

As of Puffin's release, it achieves a new SOTA for the GPT4All benchmarks! Supplanting Hermes for the #1 position!
(Rounded to nearest tenth)

Previous SOTA: Hermes - 68.8
New SOTA: Puffin - 69.9 (+1.1)

Note: after release, Puffin has since had its average GPT4All score beaten by 0.1%, by Nous' very own model Hermes-2!
Latest SOTA w/ Hermes-2: 70.0 (+0.1 over Puffin's 69.9 score)

That being said, Puffin supplants Hermes-2 for the #1 spot in Arc-E, HellaSwag and Winogrande!

Puffin also perfectly ties with Hermes in PIQA; however, Hermes-2 still excels in much of Big Bench and AGIEval, so it's highly recommended you give it a try as well!

GPT4All:

```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.4983|± |0.0146|
| | |acc_norm|0.5068|± |0.0146|
|arc_easy | 0|acc |0.7980|± |0.0082|
| | |acc_norm|0.7757|± |0.0086|
|boolq | 1|acc |0.8150|± |0.0068|
|hellaswag | 0|acc |0.6132|± |0.0049|
| | |acc_norm|0.8043|± |0.0040|
|openbookqa | 0|acc |0.3560|± |0.0214|
| | |acc_norm|0.4560|± |0.0223|
|piqa | 0|acc |0.7954|± |0.0094|
| | |acc_norm|0.8069|± |0.0092|
|winogrande | 0|acc |0.7245|± |0.0126|
```

Big Bench:

```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5368|± |0.0363|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7127|± |0.0236|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3023|± |0.0286|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.1003|± |0.0159|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2520|± |0.0194|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.1743|± |0.0143|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4200|± |0.0285|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.2900|± |0.0203|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.5430|± |0.0111|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4442|± |0.0235|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2074|± |0.0128|
|bigbench_snarks | 0|multiple_choice_grade|0.5083|± |0.0373|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.4970|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3260|± |0.0148|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2136|± |0.0116|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1326|± |0.0081|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4200|± |0.0285|
```

AGI Eval:

```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2283|± |0.0264|
| | |acc_norm|0.2244|± |0.0262|
|agieval_logiqa_en | 0|acc |0.2780|± |0.0176|
| | |acc_norm|0.3164|± |0.0182|
|agieval_lsat_ar | 0|acc |0.2348|± |0.0280|
| | |acc_norm|0.2043|± |0.0266|
|agieval_lsat_lr | 0|acc |0.3392|± |0.0210|
| | |acc_norm|0.2961|± |0.0202|
|agieval_lsat_rc | 0|acc |0.4387|± |0.0303|
| | |acc_norm|0.3569|± |0.0293|
|agieval_sat_en | 0|acc |0.5874|± |0.0344|
| | |acc_norm|0.5194|± |0.0349|
|agieval_sat_en_without_passage| 0|acc |0.4223|± |0.0345|
| | |acc_norm|0.3447|± |0.0332|
|agieval_sat_math | 0|acc |0.3364|± |0.0319|
| | |acc_norm|0.2773|± |0.0302|
```