---
license: afl-3.0
language:
- en
widget:
- text: "<human>What's my name?<bot>"
  example_title: "Who am I?"
- text: "<human>How to make a campfire<bot>"
  example_title: "Tutorial"
---

# Supervised finetuning demonstration

Models are finetuned on generated conversations curated from the [Open Assistant](https://github.com/LAION-AI/Open-Assistant) project.
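
The widget prompts above show the expected input format: a `<human>` turn followed by a `<bot>` tag where the model continues. A minimal sketch of querying the model this way (the sampling settings are illustrative, borrowed from the ranking example below, not a training configuration):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("theblackcat102/galactica-1.3b-v2")
model = AutoModelForCausalLM.from_pretrained("theblackcat102/galactica-1.3b-v2").eval().half().cuda()

# Prompt in the <human>...<bot> format used by the widget examples
prompt = "<human>How to make a campfire<bot>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
inputs.pop("token_type_ids", None)  # generate() does not accept token_type_ids

output = model.generate(**inputs, do_sample=True, top_k=60, max_length=220)
print(tokenizer.decode(output[0]))
```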

# Mixing a reward model with sampling

We can use a reward model to rank sampled answers and keep the best one, as in this example:

```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("theblackcat102/galactica-1.3b-v2")
model = AutoModelForCausalLM.from_pretrained("theblackcat102/galactica-1.3b-v2").eval().half().cuda()

reward_name = "OpenAssistant/reward-model-deberta-v3-large"
rank_tokenizer = AutoTokenizer.from_pretrained(reward_name)
rank_model = AutoModelForSequenceClassification.from_pretrained(reward_name).eval().half().cuda()

questions = ["<question>How do I make a resume?<answer>"]
full_results = {}
total_scores = 0.0
for question in questions:
    inputs = tokenizer(question, return_tensors="pt", padding=True).to(model.device)
    if 'token_type_ids' in inputs:
        inputs.pop('token_type_ids')
    # Sample many candidate answers for the same prompt
    outputs = model.generate(**inputs, do_sample=True,
        top_k=60,
        max_length=220,
        num_return_sequences=80,
        early_stopping=True
    )
    print(question)

    results = []
    for i, beam_output in enumerate(outputs):
        output = tokenizer.decode(beam_output, truncate_before_pattern=[r"\n\n^#", "^'''", "\n\n\n"])
        question, answer = output.split('<answer>', maxsplit=1)
        answer = answer.split('<question>')[0].replace('<|endoftext|>', '').lstrip().split('<answer>')[0]
        # Score the (question, answer) pair with the reward model
        rank_inputs = rank_tokenizer(question, answer, return_tensors="pt",
                                     padding=True, max_length=512, truncation=True).to(rank_model.device)
        score = rank_model(**rank_inputs).logits[0].cpu().detach()
        results.append((answer, score, output))
    full_results[question] = results
    # Keep the candidate the reward model scores highest
    sorted_result = sorted(results, key=lambda x: x[1], reverse=True)
    total_scores += sorted_result[0][1].item()
    print('score', sorted_result[0][1].item())
    print('-----Best rank-----')
    print(sorted_result[0][0])
    print('-------------------')
```
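
The reward model can also be queried on its own. A minimal standalone sketch (the candidate answers here are made up for illustration; CPU and full precision keep it simple):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_name = "OpenAssistant/reward-model-deberta-v3-large"
rank_tokenizer = AutoTokenizer.from_pretrained(reward_name)
rank_model = AutoModelForSequenceClassification.from_pretrained(reward_name).eval()

question = "How do I make a resume?"
candidates = [
    "Start with your contact details, then summarize your experience and skills.",
    "I have no idea.",
]
with torch.no_grad():
    for answer in candidates:
        inputs = rank_tokenizer(question, answer, return_tensors="pt", truncation=True, max_length=512)
        score = rank_model(**inputs).logits[0].item()  # higher = preferred
        print(f"{score:+.3f}  {answer}")
```

A more helpful answer should receive a higher score, which is what the ranking loop above exploits when picking the best of the sampled candidates.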

Check out the Weights & Biases [report](https://api.wandb.ai/report/theblackcat102/8yg0c0r2) for training details.

Thanks to [BASIC Lab](https://basiclab.lab.nycu.edu.tw/Yummy/index.html#) for the compute resources. BASIC Lab is an academic research lab focusing on multi-modality learning and data mining.