---
license: mit
datasets:
- hendrydong/preference_700K
pipeline_tag: text-classification
---

# Introduction
This is a reward model (based on Gemma-2b-it) trained with the Bradley-Terry (BT) loss on the [hendrydong/preference_700K](https://huggingface.co/datasets/hendrydong/preference_700K) dataset.

This reward model is especially useful if you need a small but capable reward model for LLMs. You can also refer to [Ray2333/GRM-Gemma-2B-sftreg](https://huggingface.co/Ray2333/GRM-Gemma-2B-sftreg) for a stronger 2B reward model trained with hidden-state regularization.
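
For context, BT training minimizes a pairwise logistic loss over the reward difference between the chosen and rejected response in each preference pair. Below is a minimal PyTorch sketch of that objective; the function and tensor names are illustrative, not taken from this repo's training code.

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(chosen_rewards: torch.Tensor, rejected_rewards: torch.Tensor) -> torch.Tensor:
    # Negative log-likelihood of P(chosen > rejected) = sigmoid(r_chosen - r_rejected).
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy example: scalar rewards for a batch of two preference pairs.
loss = bradley_terry_loss(torch.tensor([1.2, 0.3]), torch.tensor([0.4, 0.9]))
```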

## Evaluation
We evaluate this reward model on the reward model benchmark [RewardBench](https://huggingface.co/spaces/allenai/reward-bench).

| Model | Average | Chat | Chat Hard | Safety | Reasoning |
|:--------------------------|:-------:|:----:|:---------:|:------:|:---------:|
| [**Ray2333/GRM-Gemma-2B-sftreg**](https://huggingface.co/Ray2333/GRM-Gemma-2B-sftreg) (Ours, 2B) | 75.1 | 95.5 | 48.2 | 80.0 | 76.8 |
| berkeley-nest/Starling-RM-7B-alpha (7B) | 74.6 | 98.0 | 43.4 | 88.6 | 74.6 |
| **Ray2333/Gemma-2B-rewardmodel-baseline** (Ours, 2B) | 73.7 | 94.1 | 46.1 | 79.6 | 75.0 |
| stabilityai/stablelm-zephyr-3b (3B) | 73.1 | 86.3 | 60.1 | 70.3 | 75.7 |
| openbmb/UltraRM-13b (13B) | 71.3 | 96.1 | 55.3 | 45.8 | 82.0 |

## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the reward model and tokenizer (fp16, single GPU).
tokenizer = AutoTokenizer.from_pretrained('Ray2333/Gemma-2B-rewardmodel-baseline')
reward_model = AutoModelForSequenceClassification.from_pretrained(
    'Ray2333/Gemma-2B-rewardmodel-baseline',
    num_labels=1,
    torch_dtype=torch.float16,
    device_map=0,
)

message = [
    {'role': 'user', 'content': "I'm going to go out to a movie, but I need someone to chat with my daughter and pretend to be me while she's home alone. But I can't do that while I'm at the movie. Can you help by impersonating me by chat with her?"},
    {'role': 'assistant', 'content': "Sorry, I'm not comfortable impersonating you in that way. I'm not willing to behave so dishonestly. Maybe you can just find a way to bring her to the movie, or you can find a babysitter?"},
]
message_template = tokenizer.apply_chat_template(message, tokenize=False)
# The rendered string looks like:
# "<bos><start_of_turn>user\nI'm going to go out to a movie, but I need someone to chat with my daughter and pretend to be me while she's home alone. But I can't do that while I'm at the movie. Can you help by impersonating me by chat with her?<end_of_turn>\n<start_of_turn>model\nSorry, I'm not comfortable impersonating you in that way. I'm not willing to behave so dishonestly. Maybe you can just find a way to bring her to the movie, or you can find a babysitter?<end_of_turn>\n"

kwargs = {"padding": "max_length", "truncation": True, "return_tensors": "pt"}
tokens = tokenizer.encode_plus(message_template, **kwargs)

with torch.no_grad():
    # Keep the batch dimension; the classification head outputs a single scalar logit.
    reward_tensor = reward_model(
        tokens["input_ids"].to(reward_model.device),
        attention_mask=tokens["attention_mask"].to(reward_model.device),
    ).logits.reshape(-1)
    reward = reward_tensor.cpu().item()
```
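
In practice, a reward model like this is used to rank candidate responses to the same prompt. Here is a short sketch reusing `tokenizer` and `reward_model` from the snippet above; the `score` helper and the two candidate replies are illustrative, not part of this repo.

```python
# Two hypothetical candidate replies to the user turn from `message` above.
prompt = message[:1]
reply_a = {'role': 'assistant', 'content': "Sorry, I can't impersonate you, but I can help you find a babysitter."}
reply_b = {'role': 'assistant', 'content': "Sure, I'll pretend to be you while you're out."}

def score(messages) -> float:
    # Render the conversation with the chat template and return the scalar reward.
    text = tokenizer.apply_chat_template(messages, tokenize=False)
    inputs = tokenizer(text, truncation=True, return_tensors="pt").to(reward_model.device)
    with torch.no_grad():
        return reward_model(**inputs).logits.item()

# The safer refusal should receive the higher reward.
best = max([reply_a, reply_b], key=lambda reply: score(prompt + [reply]))
```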