---
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
- Muennighoff/P3
- Muennighoff/natural-instructions
pipeline_tag: text-generation
tags:
- gpt_neox
- red_pajama
---

**Original Model Link: https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1**

This will NOT work with llama.cpp as of 5/13/2023. It DOES now work (as of 5/13/2023) with the GGML repository at https://github.com/ggerganov/ggml/ via its gpt-neox example.
It also works in my project https://github.com/keldenl/gpt-llama.cpp (which uses ggml as an inference engine). A hedged Python sketch for loading the quantized file follows below.
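
For a quick test from Python, a library such as ctransformers (https://github.com/marella/ctransformers) can also load GGML files of the gpt_neox architecture. This is a minimal sketch, not an officially supported path for this repo: the file path is a placeholder for whichever quantized `.bin` you downloaded, and the exact ctransformers API may differ between versions.

```python
# Hedged sketch: load a GGML gpt_neox file with ctransformers (not part of this repo's tooling).
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "path/to/RedPajama-INCITE-Instruct-3B-v1-q4_0.bin",  # placeholder: use your downloaded file
    model_type="gpt_neox",
)

prompt = (
    "Paraphrase the given sentence into a different sentence.\n\n"
    "Input: Could you recommend some hotels that have cheap price in Zurich?\n"
    "Output:"
)
print(llm(prompt, max_new_tokens=64))
```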

# RedPajama-INCITE-Instruct-3B-v1

RedPajama-INCITE-Instruct-3B-v1 was developed by Together and leaders from the open-source AI community, including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), the Stanford Hazy Research group, and LAION.

The model was fine-tuned for few-shot applications on the data of [GPT-JT](https://huggingface.co/togethercomputer/GPT-JT-6B-v1), excluding tasks that overlap with the HELM core scenarios.

## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B parameter pretrained language model.

## Prompt Template
To prompt the model, use a standard instruction format with few-shot examples, for instance:
```
Paraphrase the given sentence into a different sentence.

Input: Can you recommend some upscale restaurants in New York?
Output: What upscale restaurants do you recommend in New York?

Input: What are the famous places we should not miss in Paris?
Output: Recommend some of the best places to visit in Paris?

Input: Could you recommend some hotels that have cheap price in Zurich?
Output:
```
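
To try the same few-shot prompt against the original (unquantized) checkpoint with Hugging Face transformers, a sketch along these lines should work. It assumes a GPU with enough memory for fp16 weights and is independent of the GGML files in this repo.

```python
# Sketch: run the few-shot prompt on the original HF checkpoint (not the GGML files).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/RedPajama-INCITE-Instruct-3B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = (
    "Paraphrase the given sentence into a different sentence.\n\n"
    "Input: Can you recommend some upscale restaurants in New York?\n"
    "Output: What upscale restaurants do you recommend in New York?\n\n"
    "Input: Could you recommend some hotels that have cheap price in Zurich?\n"
    "Output:"
)

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
# Print only the newly generated continuation after the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```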

## Which model to download?
* The q4_0 file provides lower quality but maximal compatibility. It will work with past and future versions of ggml.
* The q4_2 file offers the best combination of performance and quality, but the format is still subject to change, so there may be compatibility issues.
* The q5_0 file uses the new 5-bit quantization method released on 26th April. It is the 5-bit equivalent of q4_0.
* The q5_1 file uses the new 5-bit quantization method released on 26th April. It is the 5-bit equivalent of q4_1.