This is a test model created for RLHF experiments. The reward model used for training is [martin-arguments](https://huggingface.co/annaovesnaatatt/martin-arguments), which was trained on 1,000 examples from the [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset.
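As a minimal sketch of how the reward model might be queried, assuming it exposes the standard single-logit `AutoModelForSequenceClassification` interface commonly used for TRL reward models (the actual head configuration may differ):

```python
# Sketch: score a prompt/response pair with the reward model.
# Assumes a sequence-classification head with a single reward logit.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "annaovesnaatatt/martin-arguments"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# hh-rlhf-style dialogue text (illustrative example, not from the dataset)
text = "Human: How do I bake bread? Assistant: Mix flour, water, yeast, and salt, then let the dough rise."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    # The scalar logit is interpreted as the reward for this pair.
    reward = model(**inputs).logits[0].item()

print(f"Reward: {reward:.4f}")
```

Higher scores indicate responses the reward model prefers; during RLHF these scores would drive the policy update (e.g., via PPO).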