Update README.md
README.md CHANGED
@@ -7,6 +7,9 @@ datasets:
 - OpenAssistant/oasst1
 - databricks/databricks-dolly-15k
 pipeline_tag: text-generation
+tags:
+- gpt_neox
+- red_pajama
 ---
 
 **Original Model Link: https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1**
@@ -37,4 +40,4 @@ To prompt the chat model, use the following format:
 * The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp
 * The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues, see below.
 * The q5_0 file is using brand new 5bit method released 26th April. This is the 5bit equivalent of q4_0.
-* The q5_1 file is using brand new 5bit method released 26th April. This is the 5bit equivalent of q4_1.
+* The q5_1 file is using brand new 5bit method released 26th April. This is the 5bit equivalent of q4_1.
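For context on how the quantized files described in this README are consumed, here is a minimal sketch using the llama-cpp-python bindings. The local file name, the version of the bindings (one contemporary with GGML-format files), and the `<human>:`/`<bot>:` prompt format are assumptions for illustration; they are not part of this change.

```python
# Minimal sketch, not part of the diff above: load one of the quantized GGML
# files with llama-cpp-python (assumes a version that still accepts GGML .bin
# files) and run a single chat-style completion.
from llama_cpp import Llama

# Hypothetical local path to the q5_0 file mentioned in the README.
llm = Llama(model_path="./RedPajama-INCITE-Chat-3B-v1.q5_0.bin")

# Assumed <human>:/<bot>: prompt format for the RedPajama-INCITE chat model.
prompt = "<human>: Write a short poem about llamas.\n<bot>:"
output = llm(prompt, max_tokens=128, stop=["<human>:"])
print(output["choices"][0]["text"])
```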