---
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
- Muennighoff/P3
- Muennighoff/natural-instructions
pipeline_tag: text-generation
tags:
- gpt_neox
- red_pajama
---

**Original Model Link: https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1**

This will NOT work with llama.cpp as of 5/8/2023. This will ONLY work with the GGML fork in https://github.com/ggerganov/ggml/pull/134, and soon with https://github.com/keldenl/gpt-llama.cpp (which uses llama.cpp or GGML).

# RedPajama-INCITE-Instruct-3B-v1

RedPajama-INCITE-Instruct-3B-v1 was developed by Together and leaders from the open-source AI community, including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, the Stanford Center for Research on Foundation Models (CRFM), the Stanford Hazy Research group, and LAION.

The model was fine-tuned for few-shot applications on the data of [GPT-JT](https://huggingface.co/togethercomputer/GPT-JT-6B-v1), excluding tasks that overlap with the HELM core scenarios.

## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B parameter pretrained language model.

## Prompt Template
To prompt the model, use a typical instruction format plus few-shot prompting, for example:
```
Paraphrase the given sentence into a different sentence.

Input: Can you recommend some upscale restaurants in New York?
Output: What upscale restaurants do you recommend in New York?

Input: What are the famous places we should not miss in Paris?
Output: Recommend some of the best places to visit in Paris?

Input: Could you recommend some hotels that have cheap price in Zurich?
Output:
```
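The few-shot format above can be assembled programmatically. A minimal sketch, assuming you build the prompt string yourself before passing it to whatever GGML runner you use (the `build_prompt` helper below is illustrative, not part of the original card):

```python
# Build a few-shot instruction prompt in the format shown above.
# The helper name is an illustrative assumption, not an official API.

def build_prompt(instruction, examples, query):
    """Assemble an instruction, worked Input/Output pairs, and a final query."""
    parts = [instruction, ""]
    for inp, out in examples:
        parts.append(f"Input: {inp}")
        parts.append(f"Output: {out}")
        parts.append("")
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)

prompt = build_prompt(
    "Paraphrase the given sentence into a different sentence.",
    [
        ("Can you recommend some upscale restaurants in New York?",
         "What upscale restaurants do you recommend in New York?"),
        ("What are the famous places we should not miss in Paris?",
         "Recommend some of the best places to visit in Paris?"),
    ],
    "Could you recommend some hotels that have cheap price in Zurich?",
)
print(prompt)
```

The trailing bare `Output:` is what cues the model to generate the completion for the final query.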

## Which model to download?
* The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp.
* The q4_2 file offers the best combination of performance and quality. This format is still subject to change, and there may be compatibility issues; see below.
* The q5_0 file uses the new 5-bit method released on 26th April. It is the 5-bit equivalent of q4_0.
* The q5_1 file uses the new 5-bit method released on 26th April. It is the 5-bit equivalent of q4_1.
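As a rough guide to on-disk size, each of these formats stores weights in 32-weight blocks with a small float scale (and, for the `_1` variants, a minimum), which works out to an approximate bits-per-weight cost. The figures below are assumptions based on the classic GGML block layouts of that era, not numbers from this card, applied to the stated 2.8B parameter count:

```python
# Estimate on-disk size per quantization format, assuming the classic
# GGML 32-weight block layouts. The bits-per-weight costs are estimates,
# not figures taken from the model card.
PARAMS = 2.8e9  # parameter count stated in the card

bits_per_weight = {
    "q4_0": 4.5,  # 4-bit quants + fp16 scale per 32-weight block
    "q4_1": 5.0,  # adds an fp16 minimum per block
    "q5_0": 5.5,  # 5-bit quants + fp16 scale
    "q5_1": 6.0,  # 5-bit quants + fp16 scale and minimum
}

sizes_gb = {name: PARAMS * bpw / 8 / 1e9 for name, bpw in bits_per_weight.items()}

for name, gb in sizes_gb.items():
    print(f"{name}: ~{gb:.2f} GB")
```

Under these assumptions the q5_1 file is about a third larger than q4_0, which is the usual size/quality trade-off between the 4-bit and 5-bit formats.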