reciprocate committed on
Commit
fde23f6
1 Parent(s): ef72194

Create README.md

README.md ADDED

---
license: apache-2.0
language:
- en
---

GPT-J (with value head weights) trained with PPO on the Anthropic HH-RLHF data, following [@reciprocated's](https://github.com/reciprocated) `trlx` example [here](https://github.com/CarperAI/trlx/blob/2f90ba0ecd640ae18cd62adb5e934a4b779f534b/examples/hh/ppo_hh.py).

- Dataset: [Dahoas/full-hh-rlhf](https://huggingface.co/datasets/Dahoas/full-hh-rlhf)
- Logs: https://wandb.ai/sorry/trlx/runs/itvi8qrn

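Checkpoints from `checkpoint_01000` through `checkpoint_10000` are published as repository revisions (see the usage example below). A minimal sketch for listing them with `huggingface_hub`, assuming the checkpoints are stored as branches:

```python
# Hedged sketch (not part of the original card): enumerate the checkpoint revisions.
# Assumes the checkpoints are published as git branches named "checkpoint_*";
# adjust if your huggingface_hub version or the repo layout differs.
from huggingface_hub import list_repo_refs

refs = list_repo_refs("reciprocate/ppo_hh_gpt-j")
checkpoints = sorted(ref.name for ref in refs.branches if ref.name.startswith("checkpoint_"))
print(checkpoints)  # e.g. ['checkpoint_01000', ..., 'checkpoint_10000']
```
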
Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from trlx.models.modeling_ppo import AutoModelForCausalLMWithHydraValueHead

model = AutoModelForCausalLM.from_pretrained("reciprocate/ppo_hh_gpt-j", revision="checkpoint_05000")
# there are checkpoint_01000, checkpoint_02000, ..., checkpoint_10000 revisions in total
# if you need access to the PPO value head, use:
# model = AutoModelForCausalLMWithHydraValueHead.from_pretrained("reciprocate/ppo_hh_gpt-j", revision="checkpoint_05000")
# the base model, for reference:
# model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"

prompt_1 = """\
Human: Hello, can you help me?
Assistant: Sure, what can I do for you?
Human: I'm looking for a good recipe for a strawberry cake. What ingredients do I need?
Assistant:\
"""
prompt_2 = """\
Human: Hi! What kind of music do you like?
Assistant: I like all kinds of music.
Human: I'm trying to learn how to play the guitar. Do you have any tips?
Assistant:\
"""
prompts = [prompt_1, prompt_2]
inputs = tokenizer(
    prompts,
    return_tensors="pt",
    padding=True,
)

samples = model.generate(
    **inputs,
    max_new_tokens=64,
    top_k=0,
    top_p=1.0,
    do_sample=True,
)

# With left padding, the prompt (plus padding) occupies the first
# `input_ids.shape[1]` positions of every sample, so slicing there strips
# both the padding and the prompt from the generated continuation.
prompt_length = inputs.input_ids.shape[1]

stop_sequences = ["Human:", "human:", "Assistant:", "assistant:"]
responses = []
for sample in samples:
    response = tokenizer.decode(sample[prompt_length:], skip_special_tokens=True)
    # Trim off any extra dialogue turns the model starts to generate
    for stop in stop_sequences:
        stop_i = response.find(stop)
        if stop_i >= 0:
            response = response[:stop_i].rstrip()
    responses.append(response)

print()
for prompt, response in zip(prompts, responses):
    print("=" * 40)
    print(prompt + response)
    print("=" * 40)
    print()
```

Output:
```
========================================
Human: Hello, can you help me?
Assistant: Sure, what can I do for you?
Human: I'm looking for a good recipe for a strawberry cake. What ingredients do I need?
Assistant: What kind of cake are you looking for? Filled baked? Chewy, creamy?
========================================

========================================
Human: Hi! What kind of music do you like?
Assistant: I like all kinds of music.
Human: I'm trying to learn how to play the guitar. Do you have any tips?
Assistant: I will give you some tips. The first thing is to practice.. Guitar can be frustrating. Being frustrated with yourself. but if you practice. you will get good. You can practice yourself. Only and only practice. Just practice. Everything. Audio. Music. Visual. Everything. practice. practice.
========================================
```
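
Because the checkpoints include the value head weights, you can also query per-token value estimates through the `trlx` wrapper. A minimal sketch, assuming the Hydra value-head model's forward output carries a `value` tensor of shape `(batch, sequence_length)` (this can differ between `trlx` versions):

```python
# Hedged sketch (not part of the original card): inspect PPO value estimates.
# Assumes the trlx value-head model's forward returns an output object with a
# `value` tensor of per-token value estimates; check your trlx version.
import torch
from transformers import AutoTokenizer
from trlx.models.modeling_ppo import AutoModelForCausalLMWithHydraValueHead

model = AutoModelForCausalLMWithHydraValueHead.from_pretrained(
    "reciprocate/ppo_hh_gpt-j", revision="checkpoint_05000"
)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

text = "Human: Hello, can you help me?\nAssistant: Sure, what can I do for you?"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

values = outputs.value  # assumed shape: (1, sequence_length)
print(values[0, -1].item())  # value estimate at the final token
```

The value at the final token is the head's estimate of the expected return for the dialogue so far under the PPO policy.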

Model card borrowed from https://huggingface.co/jon-tow/hh-gpt-j