This Llama-3-8b-based reward model was trained using OpenRLHF on a combination of preference datasets, available at https://huggingface.co/datasets/OpenLLMAI/preference_700K.
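
A quick way to inspect the preference data is via the `datasets` library. This is a minimal sketch; the `train` split name and the column layout are assumptions based on typical preference datasets, not something stated here:

```
# Peek at the preference data used for reward model training.
from datasets import load_dataset

ds = load_dataset("OpenLLMAI/preference_700K", split="train")  # assumes a "train" split
print(ds)     # number of examples and column names
print(ds[0])  # one preference pair (e.g. a chosen vs. rejected response)
```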

Base model: https://huggingface.co/OpenRLHF/Llama-3-8b-sft-mixture

Training hyperparameters:

```
Scheduler: Cosine
Learning Rate: 9e-6
Warmup Ratio: 0.03
Batch Size: 256
Epochs: 1
```
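
For reference, below is a minimal sketch of how these hyperparameters translate into a cosine learning-rate schedule with warmup, using `transformers.get_cosine_schedule_with_warmup`. The placeholder model, the approximate dataset size, and the step arithmetic are illustrative assumptions, not the original OpenRLHF training script:

```
import math
import torch
from transformers import get_cosine_schedule_with_warmup

# Placeholder module standing in for the Llama-3-8b reward model.
model = torch.nn.Linear(16, 1)

num_examples = 700_000   # approximate size of preference_700K (assumption)
batch_size = 256
epochs = 1
learning_rate = 9e-6
warmup_ratio = 0.03

# Steps per epoch times number of epochs, then warmup as a fraction of total steps.
total_steps = math.ceil(num_examples / batch_size) * epochs
warmup_steps = int(warmup_ratio * total_steps)

optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=warmup_steps,
    num_training_steps=total_steps,
)

for step in range(total_steps):
    # ... forward/backward on a preference batch would go here ...
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```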