theblackcat102 committed
Commit 844b303
1 Parent(s): 6ec6e37

Update README.md

Files changed (1): README.md +44 -4
README.md CHANGED
@@ -9,17 +9,57 @@ widget:
  example_title: "Tutorial"
---
# Supervised Finetuning demonstration

Models are finetuned on generated conversations curated from the [Open Assistant](https://github.com/LAION-AI/Open-Assistant) project.
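
As a quick check of the finetuned checkpoint on its own, you can sample a single answer. This is a minimal sketch, assuming the same checkpoint path and the `<question>...<answer>` prompt format used in the ranking example below:

```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# same checkpoint as the ranking example below
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b-base-finetuned/checkpoint-1000")
model = AutoModelForCausalLM.from_pretrained("facebook/galactica-1.3b-base-finetuned/checkpoint-1000").eval().half().cuda()

# the model is assumed to expect the <question>...<answer> prompt format
prompt = "<question>How do I make a resume?<answer>"
inputs = tokenizer(prompt, return_tensors="pt").to(0)
inputs.pop("token_type_ids", None)  # generate() does not accept token_type_ids
with torch.no_grad():
    output = model.generate(**inputs, do_sample=True, top_k=60, max_length=220)
print(tokenizer.decode(output[0]))
```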

# Mixing reward model with sampling

We can use a reward model to rank sampled answers and keep the best one, as in this example code:

```
import torch
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b-base-finetuned/checkpoint-1000")
model = AutoModelForCausalLM.from_pretrained("facebook/galactica-1.3b-base-finetuned/checkpoint-1000").eval().half().cuda()

reward_name = "theblackcat102/electra-large-reward-model"
rank_model = AutoModelForSequenceClassification.from_pretrained(reward_name).eval().half().cuda()
rank_tokenizer = AutoTokenizer.from_pretrained(reward_name)

questions = ["<question>How do I make a resume?<answer>"]
full_results = {}
total_scores = 0.0

for question in questions:
    inputs = tokenizer(question, return_tensors="pt", padding=True).to(0)
    # generate() does not accept token_type_ids, which some tokenizers emit
    inputs.pop("token_type_ids", None)
    # sample a pool of candidate answers from the finetuned model
    outputs = model.generate(
        **inputs,
        do_sample=True,
        top_k=60,
        max_length=220,
        num_return_sequences=80,
        early_stopping=True,
    )
    print(question)

    results = []
    for beam_output in outputs:
        output = tokenizer.decode(beam_output)
        # split the sampled sequence back into its question and answer parts
        decoded_question, answer = output.split("<answer>", maxsplit=1)
        answer = answer.split("<question>")[0].replace("<|endoftext|>", "").lstrip()
        # score the (question, answer) pair with the reward model,
        # on the same GPU the reward model lives on
        rank_inputs = rank_tokenizer(
            decoded_question,
            answer,
            return_tensors="pt",
            padding=True,
            max_length=512,
            truncation=True,
        ).to(0)
        with torch.no_grad():
            score = rank_model(**rank_inputs).logits[0].cpu().detach()
        results.append((answer, score, output))
    full_results[question] = results
    # highest reward first
    sorted_result = sorted(results, key=lambda x: x[1], reverse=True)
    total_scores += sorted_result[0][1].item()
    print("score", sorted_result[0][1].item())
    print("-----Best rank-----")
    print(sorted_result[0][0])
    print("-------------------")
```
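
The reward model can also be used on its own to compare candidate answers. Here is a small sketch, assuming the same `(question, answer)` pair encoding that the loop above uses; the two answers are made-up examples for illustration:

```
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_name = "theblackcat102/electra-large-reward-model"
rank_model = AutoModelForSequenceClassification.from_pretrained(reward_name).eval()
rank_tokenizer = AutoTokenizer.from_pretrained(reward_name)

question = "<question>How do I make a resume?"
candidates = [  # hypothetical answers for illustration
    "List your contact details, work experience and education, then proofread carefully.",
    "I don't know.",
]
for answer in candidates:
    inputs = rank_tokenizer(question, answer, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        score = rank_model(**inputs).logits[0].item()
    print(f"{score:.3f}  {answer}")
```

Sampling many candidates and keeping the top-scoring one, as the loop above does, is essentially best-of-n sampling against the reward model.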

Check out the Weights & Biases [report](https://api.wandb.ai/report/theblackcat102/8yg0c0r2) for training details.

Thanks to [BASIC lab](https://basiclab.lab.nycu.edu.tw/Yummy/index.html#) for the compute resources.