---
datasets:
- Anthropic/hh-rlhf
language:
- en
tags:
- rlhf
model-index:
  - name: deberta-v3-large-tasksource-rlhf-reward-model
    results:
      - task:
          type: text-classification
          name: RLHF
        dataset:
          type: rlhf
          name: Anthropic/hh-rlhf
          split: validation
        metrics:
          - type: accuracy
            value: 0.7516
            verified: true
---
# Reward model based on [`deberta-v3-large-tasksource-nli`](https://huggingface.co/sileod/deberta-v3-large-tasksource-nli), fine-tuned on Anthropic/hh-rlhf
Trained for one epoch with a learning rate of 1e-5.

The data are described in the paper: [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/abs/2204.05862). 

The validation accuracy of 75.16% is currently the best publicly reported result (vs. 69.25% for `OpenAssistant/reward-model-deberta-v3-large-v2`).
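
A minimal usage sketch with `transformers`. The repo id below and the example dialogues are assumptions (the card does not state them); hh-rlhf stores each conversation as one plain-text string with `\n\nHuman:` / `\n\nAssistant:` turn markers, and a reward model with a single-logit classification head returns one scalar score per dialogue.

```python
# Hedged sketch: scoring hh-rlhf-style dialogues with this reward model.
# MODEL_ID is an assumed repo id, not confirmed by the card.
MODEL_ID = "sileod/deberta-v3-large-tasksource-rlhf-reward-model"

def format_dialogue(turns):
    """Join (speaker, text) turns into the hh-rlhf plain-text format."""
    return "".join(f"\n\n{speaker}: {text}" for speaker, text in turns)

def reward(model, tokenizer, dialogue):
    """Scalar reward for one full dialogue string (single-logit head)."""
    import torch  # local import keeps the formatting helper dependency-free
    inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**inputs).logits[0, 0].item()

if __name__ == "__main__":
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

    prompt = [("Human", "How do I bake bread?")]
    chosen = format_dialogue(
        prompt + [("Assistant", "Mix flour, water, yeast and salt, knead, "
                                "let it rise, then bake at 230 C.")])
    rejected = format_dialogue(prompt + [("Assistant", "No idea, sorry.")])

    # A well-trained reward model should score the helpful answer higher.
    print(reward(model, tokenizer, chosen) > reward(model, tokenizer, rejected))
```

The comparison at the end mirrors how the validation accuracy above is computed: a pair counts as correct when the chosen response receives a higher reward than the rejected one.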