
We train a collection of models with RLHF on the datasets listed below. We use DPO for hh-rlhf and unalignment, and PPO to train a model that completes IMDB prefixes with positive sentiment.
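For reference, DPO fits the policy directly on chosen/rejected preference pairs by maximizing the implicit reward margin over a frozen reference model. A minimal sketch of the per-pair loss, using only plain floats (the function name, argument names, and `beta=0.1` are illustrative, not taken from this repo's training code):

```python
import math

def dpo_pair_loss(policy_chosen_logp: float,
                  policy_rejected_logp: float,
                  ref_chosen_logp: float,
                  ref_rejected_logp: float,
                  beta: float = 0.1) -> float:
    """Per-pair DPO loss: -log sigmoid of the scaled reward margin."""
    # Implicit rewards are policy-vs-reference log-probability ratios.
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # Bradley-Terry preference loss: -log sigma(margin).
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With no margin the loss is log 2; as the policy assigns relatively more probability to the chosen completion, the loss falls toward zero.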


Datasets used to train amirabdullah19852020/interpreting_reward_models