---
license: afl-3.0
language:
- en
widget:
- text: >-
    <prefix>You are a helpful chatbot named Stan</prefix><human>What's my
    name?<bot>
  example_title: Who am I?
- text: >-
    <prefix>You are a helpful chatbot named Stan</prefix><human>How to make a
    campfire<bot>
  example_title: Tutorial
datasets:
- Dahoas/instruct-synthetic-prompt-responses
- openai/webgpt_comparisons
- squad_v2
- samsum
- allenai/soda
- xsum
- multi_news
- cnn_dailymail
- scitldr
- billsum
pipeline_tag: text-generation
tags:
- finance
---

# Supervised Finetuning demonstration

Models are finetuned on generated conversations curated from [Open Assistant](https://github.com/LAION-AI/Open-Assistant).

**This model was finetuned for only 2,000 iterations and is uploaded for ease of sharing only.**
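The model expects prompts in the special-token format shown in the widget examples: a system instruction wrapped in `<prefix>...</prefix>`, the user turn after `<human>`, and a trailing `<bot>` marker where generation begins. A minimal sketch of building such a prompt (the helper name `build_prompt` is our own, not part of the model card):

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a prompt in this model's special-token format."""
    return f"<prefix>{system}</prefix><human>{user}<bot>"

# Example: same shape as the widget prompts above
prompt = build_prompt("You are a helpful chatbot named Stan", "What's my name?")
print(prompt)
```

The model's completion is everything generated after the final `<bot>` token.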

# Mixing reward model with sampling

We can use a reward model to sample many candidate answers and keep the highest-ranked one, as in this example:

```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the finetuned generation model
tokenizer = AutoTokenizer.from_pretrained("theblackcat102/galactica-1.3b-v2")
model = AutoModelForCausalLM.from_pretrained("theblackcat102/galactica-1.3b-v2").eval().half().cuda()

# Load the reward model used to score candidate answers
reward_name = "OpenAssistant/reward-model-deberta-v3-large"
rank_model = AutoModelForSequenceClassification.from_pretrained(reward_name).eval().half().cuda()
rank_tokenizer = AutoTokenizer.from_pretrained(reward_name)

questions = ["<prefix>You are a helpful chatbot called Agnes</prefix><human>How do I make a resume?<bot>"]
full_results = {}
total_scores = 0.0
for question in questions:
    inputs = tokenizer(question, return_tensors="pt", padding=True).to(0)
    if 'token_type_ids' in inputs:
        inputs.pop('token_type_ids')
    # Sample many candidate completions for the same prompt
    outputs = model.generate(**inputs, do_sample=True,
        top_k=60,
        max_length=220,
        num_return_sequences=80,
        early_stopping=True
    )
    print(question)

    results = []
    for sample_output in outputs:
        output = tokenizer.decode(sample_output)
        # Split the echoed prompt from the generated answer and strip trailing tokens
        prompt_part, answer = output.split('<bot>', maxsplit=1)
        answer = answer.split('</s>')[0].replace('<|endoftext|>', '').lstrip().split('<bot>')[0]
        # Score the (question, answer) pair with the reward model
        rank_inputs = rank_tokenizer(prompt_part, answer, return_tensors="pt",
                                     padding=True, max_length=512, truncation=True).to(0)
        score = rank_model(**rank_inputs).logits[0].cpu().detach()
        results.append((answer, score, output))
    full_results[question] = results
    # Keep the highest-scoring answer
    sorted_result = sorted(results, key=lambda x: x[1], reverse=True)
    total_scores += sorted_result[0][1].item()
    print('score', sorted_result[0][1].item())
    print('-----Best rank-----')
    print(sorted_result[0][0])
    print('-------------------')
```


This work is done under the [Open Assistant](https://github.com/LAION-AI/Open-Assistant) initiative, which democratizes open-source AI assistants. Feel free to join the Discord and contribute on GitHub!

Thanks to [BASIC lab](https://basiclab.lab.nycu.edu.tw/Yummy/index.html#) for the compute resources. BASIC Lab is an academic research lab focusing on multi-modality learning and data mining.