---
datasets:
- PKU-Alignment/PKU-SafeRLHF
language:
- en
tags:
- reinforcement-learning-from-human-feedback
- reinforcement-learning
- beaver
- safety
- llama
- ai-safety
- deepspeed
- rlhf
- alpaca
library_name: safe-rlhf
---
# 🦫 Beaver's Reward Model

## Model Details

The Beaver reward model is a preference model trained on the [PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset. In the Safe RLHF algorithm, it provides the helpfulness reward signal that steers the Beaver model toward more helpful responses; the preference objective behind such a model is sketched after the list below.

- **Developed by:** the [PKU-Alignment](https://github.com/PKU-Alignment) Team.
- **Model Type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license.
- **Fine-tuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca).
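Concretely, preference models like this one are typically trained with a pairwise ranking (Bradley-Terry style) objective: for each annotated pair in the dataset, the scalar score of the preferred response is pushed above the score of the rejected one. The snippet below is a minimal illustrative sketch of that objective, not the actual safe-rlhf training code; the function name and toy numbers are ours.

```python
import torch
import torch.nn.functional as F

def pairwise_preference_loss(better_end_scores: torch.Tensor,
                             worse_end_scores: torch.Tensor) -> torch.Tensor:
    # Each argument holds one scalar reward per preference pair
    # (cf. `end_scores` in the usage example below). The loss is small when
    # the preferred response already outscores the rejected one.
    return -F.logsigmoid(better_end_scores - worse_end_scores).mean()

# Toy example with two preference pairs.
better = torch.tensor([1.8, 0.9])
worse = torch.tensor([-0.5, 0.3])
print(pairwise_preference_loss(better, worse))
```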
## Model Sources

- **Repository:** <https://github.com/PKU-Alignment/safe-rlhf>
- **Beaver:** <https://huggingface.co/PKU-Alignment/beaver-7b-v2.0>
- **Dataset:** <https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF>
- **Reward Model:** <https://huggingface.co/PKU-Alignment/beaver-7b-v2.0-reward>
- **Cost Model:** <https://huggingface.co/PKU-Alignment/beaver-7b-v2.0-cost>
- **Dataset Paper:** <https://arxiv.org/abs/2307.04657>
- **Paper:** <https://arxiv.org/abs/2310.12773>
## How to Use the Reward Model

```python
import torch
from transformers import AutoTokenizer
from safe_rlhf.models import AutoModelForScore

# Load the reward model in bfloat16 and dispatch it across available devices.
model = AutoModelForScore.from_pretrained('PKU-Alignment/beaver-7b-v2.0-reward', torch_dtype=torch.bfloat16, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained('PKU-Alignment/beaver-7b-v2.0-reward')

# The input must follow the conversation template the model was trained with.
prompt = 'BEGINNING OF CONVERSATION: USER: hello ASSISTANT:Hello! How can I help you today?'

inputs = tokenizer(prompt, return_tensors='pt')
output = model(**inputs)
print(output)

# ScoreModelOutput(
#     scores=tensor([[[-5.5000],
#                     [-0.1650],
#                     [-4.0625],
#                     [-0.0522],
#                     [-1.0859],
#                     [-0.4277],
#                     [-2.3750],
#                     [-2.5781],
#                     [-1.0859],
#                     [-1.1250],
#                     [-0.3809],
#                     [-1.0000],
#                     [-1.2344],
#                     [-0.7344],
#                     [-1.3438],
#                     [-1.2578],
#                     [-0.4883],
#                     [-1.1953],
#                     [-1.1953],
#                     [ 0.0908],
#                     [-0.8164],
#                     [ 0.1147],
#                     [-0.1650],
#                     [-0.4238],
#                     [ 0.3535],
#                     [ 1.2969],
#                     [ 0.7461],
#                     [ 1.8203]]], grad_fn=<ToCopyBackward0>),
#     end_scores=tensor([[1.8203]], grad_fn=<ToCopyBackward0>),
#     last_hidden_state=tensor([[[ 0.4766, -0.1787, -0.5312, ..., -0.0194,  0.2773,  0.7500],
#                                [ 0.5625,  2.0000,  0.8438, ...,  1.8281,  1.0391, -0.6914],
#                                [ 0.6484,  0.0388, -0.7227, ..., -0.4688,  0.2754, -1.4688],
#                                ...,
#                                [ 0.2598,  0.6758, -0.6289, ..., -1.0234,  0.5898,  1.4375],
#                                [ 1.7500, -0.0913, -1.1641, ..., -0.8438,  0.4199,  0.8945],
#                                [ 1.8516, -0.0684, -1.1094, ...,  0.1885,  0.4980,  1.1016]]],
#                              dtype=torch.bfloat16, grad_fn=<ToCopyBackward0>),
#     end_last_hidden_state=tensor([[ 1.8516, -0.0684, -1.1094, ...,  0.1885,  0.4980,  1.1016]],
#                                  dtype=torch.bfloat16, grad_fn=<ToCopyBackward0>),
#     end_index=tensor([27])
# )
```
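In the `ScoreModelOutput` above, `scores` traces a score for every token position, while `end_scores` is the scalar reward taken at the final token (`end_index`); this is the value used to compare responses during RLHF. As an illustration, you could rank two candidate replies with a small helper. The helper below is our own sketch reusing the `model` and `tokenizer` loaded above, not part of the safe-rlhf API.

```python
# Hypothetical helper: return the scalar reward for an assistant reply to a
# fixed user prompt, reusing the `model` and `tokenizer` loaded above.
def reward_of(reply: str) -> float:
    prompt = f'BEGINNING OF CONVERSATION: USER: hello ASSISTANT:{reply}'
    inputs = tokenizer(prompt, return_tensors='pt')
    with torch.no_grad():
        return model(**inputs).end_scores.item()

# The friendlier, more helpful reply should receive the higher reward.
print(reward_of('Hello! How can I help you today?'))
print(reward_of('Go away.'))
```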