---
datasets:
- PKU-Alignment/PKU-SafeRLHF
language:
- en
tags:
- reinforcement-learning-from-human-feedback
- reinforcement-learning
- beaver
- safety
- llama
- ai-safety
- deepspeed
- rlhf
- alpaca
library_name: safe-rlhf
---

# 🦫 Beaver's Reward Model

## Model Details

The Beaver reward model is a preference model trained on the [PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It is used in the Safe RLHF algorithm to help the Beaver model become more helpful.

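Because it is a preference model, a reward model of this kind is typically trained with a pairwise ranking objective over responses that human annotators compared. The sketch below illustrates that standard objective only; the function name is illustrative rather than part of the safe-rlhf API, and the two tensors stand for the model's scalar rewards (`end_scores`) on the preferred and rejected responses.

```python
import torch
import torch.nn.functional as F

def pairwise_preference_loss(reward_better: torch.Tensor, reward_worse: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style ranking loss: drive the preferred response's
    # scalar reward above the rejected response's.
    return -F.logsigmoid(reward_better - reward_worse).mean()
```
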
- **Developed by:** the [PKU-Alignment](https://github.com/PKU-Alignment) Team.
- **Model Type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license.
- **Fine-tuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca).

## Model Sources

- **Repository:** <https://github.com/PKU-Alignment/safe-rlhf>
- **Beaver:** <https://huggingface.co/PKU-Alignment/beaver-7b-v1.0>
- **Dataset:** <https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF>
- **Reward Model:** <https://huggingface.co/PKU-Alignment/beaver-7b-v1.0-reward>
- **Paper:** *Coming soon...*

## How to Use the Reward Model

```python
from transformers import AutoTokenizer
from safe_rlhf.models import AutoModelForScore

model = AutoModelForScore.from_pretrained('PKU-Alignment/beaver-7b-v1.0-reward', device_map='auto')
tokenizer = AutoTokenizer.from_pretrained('PKU-Alignment/beaver-7b-v1.0-reward', use_fast=False)

# The input must follow the conversation template the model was trained with.
input = 'BEGINNING OF CONVERSATION: USER: hello ASSISTANT:Hello! How can I help you today?'

input_ids = tokenizer(input, return_tensors='pt')
output = model(**input_ids)
print(output)

# `scores` holds one score per input token; `end_scores` is the score at the
# last token, which serves as the scalar reward for the whole response.
# ScoreModelOutput(
#     scores=tensor([[[-19.6476],
#                     [-20.2238],
#                     [-21.4228],
#                     [-19.2506],
#                     [-20.2728],
#                     [-23.8799],
#                     [-22.6898],
#                     [-21.5825],
#                     [-21.0855],
#                     [-20.2068],
#                     [-23.8296],
#                     [-21.4940],
#                     [-21.9484],
#                     [-13.1220],
#                     [ -6.4499],
#                     [ -8.1982],
#                     [ -7.2492],
#                     [ -9.3377],
#                     [-13.5010],
#                     [-10.4932],
#                     [ -9.7837],
#                     [ -6.4540],
#                     [ -6.0084],
#                     [ -5.8093],
#                     [ -6.6134],
#                     [ -5.8995],
#                     [ -9.1505],
#                     [-11.3254]]], grad_fn=<ToCopyBackward0>),
#     end_scores=tensor([[-11.3254]], grad_fn=<ToCopyBackward0>)
# )
```
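
To use the model as a reward signal, compare `end_scores` across candidate responses to the same prompt. Below is a minimal sketch that reuses the `model` and `tokenizer` loaded above; the helper name `reward_of` and the two candidate responses are illustrative, not part of the safe-rlhf API, and the sketch assumes the inputs and the model sit on the same device.

```python
import torch

PROMPT = 'BEGINNING OF CONVERSATION: USER: hello ASSISTANT:'

def reward_of(response: str) -> float:
    # Score the prompt plus one candidate response and read out the
    # scalar reward at the final token.
    input_ids = tokenizer(PROMPT + response, return_tensors='pt')
    with torch.no_grad():
        output = model(**input_ids)
    return output.end_scores.item()

# The candidate with the higher reward is the one the reward model prefers.
print(reward_of('Hello! How can I help you today?'))
print(reward_of('Go away.'))
```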