Update README.md
README.md CHANGED

@@ -144,8 +144,7 @@ Thank you to all my generous patrons and donaters!
 
 # Original model card: NousResearch's Redmond Puffin 13B
 
-
-
+![puffin](https://i.imgur.com/R2xTHMb.png)
 
 ## **Redmond-Puffin-13b (Currently available as a Preview edition)**
 
@@ -177,13 +176,15 @@ The model follows the Vicuna ShareGPT prompt format:
 
 ## Notable Features:
 
-
+- The first Llama-2 based fine-tuned model released by Nous Research.
+
+- Ability to recall information from upto late 2022 without internet. (ChatGPT cut off date is in 2021)
 
-- Pretrained on 2 trillion tokens of text.
+- Pretrained on 2 trillion tokens of text. (This is double the amount of most Open LLM's)
 
 - Pretrained with a context length of 4096 tokens, and fine-tuned on a significant amount of multi-turn conversations reaching that full token limit.
 
-
+- The first commercially available language model released by Nous Research.
 
 ## Current Limitations
 
@@ -201,4 +202,4 @@ In the near future we plan on releasing an improved version of the model with th
 
 ## Benchmarks coming soon
 
-benchmarks coming soon!
+benchmarks coming soon!
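The second hunk's context line notes that the model follows the Vicuna ShareGPT prompt format. For reference, here is a minimal sketch of assembling such a multi-turn prompt in Python; the `### human:` / `### gpt:` role tags and the `build_puffin_prompt` helper are illustrative assumptions, not taken from this diff, so verify the exact template against the upstream NousResearch model card.

```python
# Minimal sketch of a Vicuna/ShareGPT-style multi-turn prompt builder.
# The "### human:" / "### gpt:" role tags are an assumption based on the
# common ShareGPT convention -- confirm against the upstream model card.

def build_puffin_prompt(turns):
    """turns: list of (user_message, assistant_reply_or_None) tuples."""
    parts = []
    for user_msg, assistant_msg in turns:
        parts.append(f"### human: {user_msg}")
        if assistant_msg is not None:
            parts.append(f"### gpt: {assistant_msg}")
    # Leave an open assistant tag so the model continues the conversation.
    parts.append("### gpt:")
    return "\n\n".join(parts)


if __name__ == "__main__":
    prompt = build_puffin_prompt([
        ("Summarise the notable features of Redmond-Puffin-13B.", None),
    ])
    print(prompt)
```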