---
language:
  - en
tags:
  - pytorch
  - causal-lm
  - pythia
license: apache-2.0
datasets:
  - Dahoas/synthetic-instruct-gptj-pairwise
---

This model was created by finetuning [`EleutherAI/pythia-12b-deduped`](https://huggingface.co/EleutherAI/pythia-12b-deduped) on the [`Dahoas/synthetic-instruct-gptj-pairwise`](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) dataset.

You can try a [demo](https://cloud.lambdalabs.com/demos/ml/gpt-neox-side-by-side) of the model hosted on [Lambda Cloud](https://lambdalabs.com/service/gpu-cloud).

### Model Details

- Finetuned by: [Lambda](https://lambdalabs.com/)
- Model type: Transformer-based Language Model
- Language: English
- Pre-trained model: [EleutherAI/pythia-12b-deduped](https://huggingface.co/EleutherAI/pythia-12b-deduped)
- Dataset: [Dahoas/synthetic-instruct-gptj-pairwise](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise)
- Library: [transformers](https://huggingface.co/docs/transformers/index)
- License: Apache 2.0

### Prerequisites

Running inference with the model requires ~24 GB of GPU memory (roughly the 12B parameters in fp16, as used in the Quick Start below).
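
If you want a quick sanity check before loading the model, one option (a minimal sketch, not part of the original card) is to query free GPU memory with PyTorch:

```python
import torch

# Optional sanity check: confirm the current GPU has roughly 24 GB free before loading the model.
if torch.cuda.is_available():
    free_bytes, total_bytes = torch.cuda.mem_get_info()
    print(f"Free GPU memory: {free_bytes / 1024**3:.1f} GiB of {total_bytes / 1024**3:.1f} GiB")
else:
    print("No CUDA device detected; CPU inference with a 12B model will be very slow.")
```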

### Quick Start

```python
import torch

from transformers import AutoTokenizer, pipeline, StoppingCriteria, StoppingCriteriaList

device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")

model_name = "lambdalabs/pythia-12b-deduped-synthetic-instruct"
max_new_tokens = 1536
stop_token = "<|stop|>"


class KeywordsStoppingCriteria(StoppingCriteria):
    """Stop generation as soon as the last generated token is one of the given token ids."""

    def __init__(self, keywords_ids: list):
        super().__init__()
        self.keywords = keywords_ids

    def __call__(
        self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs
    ) -> bool:
        if input_ids[0][-1] in self.keywords:
            return True
        return False


# Load the tokenizer and register the stop token so it can be detected and decoded.
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.add_tokens([stop_token])

# Stop generation once the stop token id is produced.
stop_ids = [tokenizer.encode(w)[0] for w in [stop_token]]
stop_criteria = KeywordsStoppingCriteria(stop_ids)

# Build a text-generation pipeline in fp16 that stops as soon as the stop token appears.
generator = pipeline(
    "text-generation",
    model=model_name,
    device=device,
    max_new_tokens=max_new_tokens,
    torch_dtype=torch.float16,
    stopping_criteria=StoppingCriteriaList([stop_criteria]),
)

example = "How can I make an omelette."
text = "Question: {}\nAnswer:".format(example)

result = generator(
    text,
    num_return_sequences=1,
)

output = result[0]["generated_text"]

print(output)
```

Output:

```

Question: How can I make an omelette.
Answer:To make an omelette, start by cracking two eggs into a bowl and whisking them together with a pinch of salt and pepper. Heat a non-stick pan over medium-high heat and add a tablespoon of butter. Once the butter has melted, pour in the egg mixture and let it cook for a few minutes until the edges start to turn golden. Then, using a spatula, fold the omelette in half and let it cook for another minute or two. Finally, flip the omelette over and cook for another minute or two until the omelette is cooked through. Serve the omelette with your favorite toppings and enjoy.<|stop|>

```
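
The `pipeline` call above stops generation via the custom stopping criteria. As an alternative sketch (not part of the original card, and assuming the stop token resolves to the same id the model was finetuned with), you can load the model explicitly and pass the stop token's id as `eos_token_id` to `generate()`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "lambdalabs/pythia-12b-deduped-synthetic-instruct"
stop_token = "<|stop|>"

# Register the stop token exactly as in the Quick Start so its id matches the model's vocabulary.
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.add_tokens([stop_token])

# Requires a GPU with ~24 GB of free memory for fp16 weights.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to("cuda")

prompt = "Question: {}\nAnswer:".format("How can I make an omelette.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Treat the stop token as the end-of-sequence token so generation halts when it is produced.
output_ids = model.generate(
    **inputs,
    max_new_tokens=1536,
    eos_token_id=tokenizer.convert_tokens_to_ids(stop_token),
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0]))
```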

### Training

The model was trained on the [`Dahoas/synthetic-instruct-gptj-pairwise`](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) dataset. We split the original dataset into a training subset (the first 32,000 examples) and a validation subset (the remaining 1,144 examples).

We finetuned the model for 4 epochs with the help of DeepSpeed. Training took 17 hours on 8x A100 80GB GPUs, with `batch_size_per_gpu` set to `4` (a global batch size of 32) and a learning rate of `0.0000025` (with linear decay to zero at the last training step). You can find the Weights and Biases record [here](https://wandb.ai/chuanli11/ft-synthetic-instruct-gptj-pairwise-pythia12b-deepspeed?workspace=user-).
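
The training script and DeepSpeed configuration are not reproduced here. Purely as an illustrative, hypothetical sketch, the hyperparameters above map onto Hugging Face `TrainingArguments` roughly as follows (the output directory and `ds_config.json` path are placeholders, not the actual setup):

```python
from transformers import TrainingArguments

# Hypothetical sketch matching the hyperparameters described above;
# the actual training script and DeepSpeed config are not part of this card.
training_args = TrainingArguments(
    output_dir="pythia-12b-deduped-synthetic-instruct",  # placeholder
    num_train_epochs=4,                # 4 epochs
    per_device_train_batch_size=4,     # batch_size_per_gpu = 4 -> global batch size 32 on 8 GPUs
    learning_rate=2.5e-6,              # 0.0000025
    lr_scheduler_type="linear",        # linear decay to zero at the last training step
    fp16=True,
    deepspeed="ds_config.json",        # assumed DeepSpeed config path
)
```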