---
license: mit
datasets:
- unalignment/toxic-dpo-v0.2
- Anthropic/hh-rlhf
- stanfordnlp/imdb
language:
- en
---

We train a collection of models with RLHF on the datasets listed above. We use DPO for hh-rlhf and toxic-dpo-v0.2, and PPO to train a model that completes IMDB prefixes with positive sentiment. Illustrative sketches of both training stages are given below.
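
A minimal sketch of the DPO stage, assuming the TRL library; this is not the authors' exact script, and the base model name is a placeholder since the card does not state which base model was used.

```python
# Hedged sketch of the DPO stage on hh-rlhf, assuming TRL.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "gpt2"  # placeholder; the card does not name the base model
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# hh-rlhf provides (chosen, rejected) completion pairs; recent TRL versions
# extract the shared prompt prefix automatically, while older versions expect
# an explicit "prompt" column.
train_dataset = load_dataset("Anthropic/hh-rlhf", split="train")

trainer = DPOTrainer(
    model=model,  # a frozen reference copy is created internally
    args=DPOConfig(output_dir="dpo-hh-rlhf", beta=0.1),
    train_dataset=train_dataset,
    processing_class=tokenizer,  # `tokenizer=` in older TRL releases
)
trainer.train()
```

The same recipe applies to unalignment/toxic-dpo-v0.2 by swapping in that dataset.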
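
A hedged sketch of the PPO sentiment stage, assuming the legacy TRL `PPOTrainer` interface (pre-0.12) and an off-the-shelf sentiment classifier (lvwerra/distilbert-imdb) as the reward model; the card confirms neither, and the model name is again a placeholder.

```python
# Hedged sketch of PPO training for positive-sentiment IMDB completions.
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, pipeline
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

config = PPOConfig(model_name="gpt2", learning_rate=1.41e-5,
                   batch_size=16, mini_batch_size=16)
tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token

# Queries are short IMDB review prefixes; the policy is rewarded for
# completing them with positive sentiment.
dataset = load_dataset("stanfordnlp/imdb", split="train")

def make_query(sample):
    sample["input_ids"] = tokenizer.encode(sample["text"])[:8]
    sample["query"] = tokenizer.decode(sample["input_ids"])
    return sample

dataset = dataset.map(make_query)
dataset.set_format(type="torch")

model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
collator = lambda data: {key: [d[key] for d in data] for key in data[0]}
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer,
                         dataset=dataset, data_collator=collator)

# Assumed reward model: a public sentiment classifier, not confirmed by the card.
sentiment_pipe = pipeline("sentiment-analysis", model="lvwerra/distilbert-imdb")
gen_kwargs = {"do_sample": True, "top_k": 0, "top_p": 1.0,
              "max_new_tokens": 32, "pad_token_id": tokenizer.eos_token_id}

for batch in ppo_trainer.dataloader:
    query_tensors = batch["input_ids"]
    response_tensors = ppo_trainer.generate(query_tensors, return_prompt=False,
                                            **gen_kwargs)
    batch["response"] = tokenizer.batch_decode(response_tensors)
    texts = [q + r for q, r in zip(batch["query"], batch["response"])]
    # Reward = classifier probability of the POSITIVE class for each completion.
    outs = sentiment_pipe(texts, top_k=None)
    rewards = [torch.tensor(next(d["score"] for d in o if d["label"] == "POSITIVE"))
               for o in outs]
    ppo_trainer.step(query_tensors, response_tensors, rewards)
```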