OpenLLama-13B for reward modeling

Usage:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

ckpt = "reciprocate/openllama-13b_rm_oasst-hh"
# Load the reward model in 4-bit precision (requires the bitsandbytes package).
model = AutoModelForSequenceClassification.from_pretrained(ckpt, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(ckpt)

# The model returns a single scalar reward score for the input text.
model(**tokenizer("ASSISTANT: This sentence is a lie.", return_tensors="pt"))[0].item()
```

Output:

```
-1.626953125
```
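A reward model like this is typically used to rank candidate completions, for example in best-of-n sampling. A minimal sketch (the `score` callable and `best_of_n` helper are illustrative, not part of this repository; `score` stands in for the model call shown above):

```python
from typing import Callable, Sequence


def best_of_n(candidates: Sequence[str], score: Callable[[str], float]) -> str:
    # Score every candidate with the reward model and keep the highest-scoring one.
    return max(candidates, key=score)


# Illustrative stand-in for the model call above:
# score = lambda text: model(**tokenizer(text, return_tensors="pt"))[0].item()
# best = best_of_n(candidate_responses, score)
```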