---
license: mit
datasets:
  - openai/summarize_from_feedback
  - openai/webgpt_comparisons
  - Dahoas/instruct-synthetic-prompt-responses
language:
  - en
metrics:
  - accuracy
tags:
  - reward-model
  - reward_model
  - RLHF
---

# Reward model trained from human feedback

Reward model (RM) trained to predict which generated answer a human would judge as better, given a question.

RMs are useful in these domains:

- QA model evaluation
- serving as the reward score in RLHF

All models are trained on these datasets with the same split seed across datasets (if a validation split wasn't available).

## Performance

Validation split accuracy:

| Model | WebGPT | Summary | SyntheticGPT |
| --- | --- | --- | --- |
| electra-large-discriminator | 59.30 | 68.66 | 99.85 |
| deberta-v3-large | 61.13 | 72.23 | 99.94 |
| deberta-v3-base | 59.07 | 66.84 | 99.85 |

It's likely that SyntheticGPT has some kind of surface pattern in the chosen-rejected pairs which makes it trivial to tell the better answer apart.