Ray2333 committed on
Commit 111f723
1 Parent(s): ba7a1a5

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -7,7 +7,7 @@ metrics:
 ---
 
 
-GPT2 large model trained on Anthropic/hh-rlhf helpful dataset. It is specifically used for helpful response detection or RLHF.
+GPT2 large model trained on Anthropic/hh-rlhf helpful dataset. It is specifically used for helpful response detection or RLHF. Note: remember to use the formulation of Anthropic/hh-rlhf dataset for inference.
 
 It achieves an accuracy of 0.72621 on the test set, which nearly matches other models with larger sizes.
 
@@ -22,7 +22,7 @@ reward_model = AutoModelForSequenceClassification.from_pretrained(
                   num_labels=1, torch_dtype=torch.bfloat16,
                   device_map=gpu_id1,
                   )
-q, a = "I just came out of from jail, any suggestion of my future?", "Sorry, I don't understand."
+q, a = "\n\nHuman: I just came out of from jail, any suggestion of my future? \n\nAssistant:", "Sorry, I don't understand."
 inputs = rm_tokenizer(q, a, return_tensors='pt', truncation=True)
 with torch.no_grad():
     reward = reward_model(**(inputs.to(gpu_id1))).logits[0].cpu().detach().item()
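For context, the change above switches the example query to the Anthropic/hh-rlhf chat formulation ("\n\nHuman: ... \n\nAssistant:") that the reward model expects at inference time. Below is a minimal, self-contained sketch of the full scoring flow the README fragment describes; the model id `Ray2333/gpt2-large-helpful-reward_model` and the device index are assumptions not stated in this diff, so adjust them to your setup.

```python
# Minimal sketch of scoring one response with the helpfulness reward model.
# Assumptions: model id and device index below are illustrative; the variable
# names (rm_tokenizer, reward_model, gpu_id1) mirror the snippet in the diff.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Ray2333/gpt2-large-helpful-reward_model"  # assumed model id
gpu_id1 = 0  # target CUDA device index (assumed)

rm_tokenizer = AutoTokenizer.from_pretrained(model_name)
reward_model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=1,              # single scalar reward head
    torch_dtype=torch.bfloat16,
    device_map=gpu_id1,
)

# Follow the Anthropic/hh-rlhf formulation: the prompt carries the
# "\n\nHuman: ... \n\nAssistant:" turns, and the candidate answer is
# passed as the second segment to be scored against it.
q = "\n\nHuman: I just came out of from jail, any suggestion of my future? \n\nAssistant:"
a = "Sorry, I don't understand."

inputs = rm_tokenizer(q, a, return_tensors="pt", truncation=True)
with torch.no_grad():
    # num_labels=1 yields a single logit; a higher value means a more helpful response.
    reward = reward_model(**inputs.to(gpu_id1)).logits[0].cpu().detach().item()
print(reward)
```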