# 🦫 Beaver's Reward Model

## Model Details

The Beaver reward model is a preference model trained on the PKU-SafeRLHF dataset. It provides the reward signal in the Safe RLHF algorithm, helping the Beaver model become more helpful; a sketch of the preference objective behind such models follows the list below.

- **Developed by:** the PKU-Alignment Team.
- **Model Type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license.
- **Fine-tuned from model:** LLaMA, Alpaca.
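
As context for how a preference model yields a scalar reward, here is a minimal sketch of the standard pairwise (Bradley-Terry style) objective commonly used to train reward models on preference data such as PKU-SafeRLHF. The tensors are hypothetical stand-ins for illustration, not the actual Beaver training code.

```python
import torch
import torch.nn.functional as F

# End-of-sequence scores for the preferred and rejected response of each
# preference pair, e.g. the `end_scores` field returned by AutoModelForScore
# (shape [batch, 1]). These values are made up for illustration.
better_rewards = torch.tensor([[0.62], [1.10]])
worse_rewards = torch.tensor([[-0.38], [0.25]])

# Maximize the margin between preferred and rejected responses:
# loss = -log(sigmoid(r_better - r_worse))
loss = -F.logsigmoid(better_rewards - worse_rewards).mean()
print(loss)  # scalar training loss
```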

## Model Sources

- **Repository:** https://github.com/PKU-Alignment/safe-rlhf
- **Dataset:** https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF

## How to Use the Reward Model

```python
import torch
from transformers import AutoTokenizer
from safe_rlhf.models import AutoModelForScore

# Load the score-head model in bfloat16 and let Accelerate place it on
# the available device(s).
model = AutoModelForScore.from_pretrained(
    'PKU-Alignment/beaver-7b-unified-reward',
    torch_dtype=torch.bfloat16,
    device_map='auto',
)
tokenizer = AutoTokenizer.from_pretrained('PKU-Alignment/beaver-7b-unified-reward')

# The model expects the conversation template used in training:
# 'BEGINNING OF CONVERSATION: USER: {prompt} ASSISTANT:{response}'
text = 'BEGINNING OF CONVERSATION: USER: hello ASSISTANT:Hello! How can I help you today?'

inputs = tokenizer(text, return_tensors='pt')  # BatchEncoding with input_ids and attention_mask
output = model(**inputs)
print(output)

# ScoreModelOutput(
#     scores=tensor([[[-7.2812],
#          [-0.8203],
#          [-0.3535],
#          [-0.5781],
#          [-0.5781],
#          [-1.2578],
#          [-2.9219],
#          [-2.8594],
#          [-2.0469],
#          [-0.8789],
#          [-1.2422],
#          [-1.5312],
#          [-0.7500],
#          [-1.4688],
#          [-0.9141],
#          [-1.0469],
#          [-1.2266],
#          [-1.4062],
#          [-1.4297],
#          [-1.1016],
#          [-0.9688],
#          [ 0.5977],
#          [ 0.6211],
#          [ 0.4238],
#          [ 0.8906],
#          [ 0.4277],
#          [ 0.6680],
#          [ 0.3789]]], grad_fn=<ToCopyBackward0>),
#     end_scores=tensor([[0.3789]], grad_fn=<ToCopyBackward0>),
#     last_hidden_state=tensor([[[-0.0552, -0.3203, -0.9180,  ...,  0.1719,  0.1309,  0.2988],
#          [-1.9609, -0.2617, -0.7227,  ...,  0.3535,  0.8945,  1.6719],
#          [-1.1016, -0.3984, -0.3398,  ...,  0.5820,  0.9062,  1.6172],
#          ...,
#          [-0.4844,  0.1387, -0.6562,  ...,  0.3789,  0.2910,  1.5625],
#          [-0.3125,  0.0811, -0.7969,  ...,  0.4688,  0.2344,  1.4453],
#          [-0.7148, -0.2139, -0.4336,  ...,  0.9219, -0.1050,  1.3594]]],
#        dtype=torch.bfloat16, grad_fn=<ToCopyBackward0>),
#     end_last_hidden_state=tensor([[-0.7148, -0.2139, -0.4336,  ...,  0.9219, -0.1050,  1.3594]],
#        dtype=torch.bfloat16, grad_fn=<ToCopyBackward0>),
#     end_index=tensor([27])
# )
```
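
The `scores` field holds a per-token score trace, while `end_scores` is the scalar reward for the whole sequence (the score at the final token, whose position is reported in `end_index`). The helper below is a small sketch of how you might rank candidate responses by that scalar; `score_response` is a hypothetical name built on the model and tokenizer loaded above, not part of the safe_rlhf API.

```python
# Rank candidate responses by their scalar reward (a sketch built on the
# usage above; `score_response` is a hypothetical helper).
def score_response(prompt: str, response: str) -> float:
    text = f'BEGINNING OF CONVERSATION: USER: {prompt} ASSISTANT:{response}'
    inputs = tokenizer(text, return_tensors='pt')
    with torch.no_grad():
        out = model(**inputs)
    return out.end_scores.item()  # reward for the full response

candidates = ['Hello! How can I help you today?', 'Go away.']
rewards = {c: score_response('hello', c) for c in candidates}
print(max(rewards, key=rewards.get))  # higher reward = preferred response
```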