We train a collection of models with RLHF on the datasets above. We use DPO for hh-rlhf and unalignment, and PPO to train a model that completes IMDB prefixes with positive sentiment.
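For reference, the DPO objective used for the hh-rlhf and unalignment runs can be sketched as follows. This is a minimal illustration of the per-pair loss, not our training code: it assumes the summed log-probabilities of the chosen and rejected completions under the policy and the frozen reference model have already been computed, and uses a hypothetical `beta` value.

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Inputs are summed log-probabilities of the chosen/rejected
    completions under the policy (pi_*) and the frozen reference
    model (ref_*); beta scales the implicit KL penalty.
    """
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    # loss = -log sigmoid(beta * margin)
    return math.log(1.0 + math.exp(-beta * margin))
```

When the policy matches the reference model the margin is zero and the loss is log 2; increasing the policy's relative preference for the chosen completion drives the loss toward zero.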