Llama2-13B-RLHF-RM

Description:

Llama2-13B-RLHF-RM is a 13-billion-parameter language model (with a context length of up to 4,096 tokens) used as the reward model in training NV-Llama2-70B-RLHF-Chat, which achieves 7.59 on MT-Bench and demonstrates strong performance on academic benchmarks.

Starting from the Llama2-13B base model, it is first instruction-tuned on the NVIDIA SFT Datablend v1 [^1] and then trained on the HH-RLHF dataset with a reward-modeling objective. Given a conversation with multiple turns between a user and an assistant, it assigns a preference score to the last assistant turn.
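
On preference data such as HH-RLHF, the reward-modeling objective is typically a pairwise (Bradley-Terry style) loss: the score assigned to the chosen response should exceed the score assigned to the rejected one. The sketch below illustrates this objective under that assumption; the function names and shapes are illustrative and do not reflect the actual NeMo-Aligner implementation.

```python
# Minimal sketch of a pairwise (Bradley-Terry style) reward-modeling loss,
# as commonly used on preference data like HH-RLHF. Illustrative only.
import torch
import torch.nn.functional as F

def pairwise_reward_loss(chosen_rewards: torch.Tensor,
                         rejected_rewards: torch.Tensor) -> torch.Tensor:
    """chosen_rewards / rejected_rewards: scalar scores per comparison,
    each of shape (batch,), taken at the last assistant turn."""
    # Maximize the log-probability that the chosen response outranks
    # the rejected one: -log sigmoid(r_chosen - r_rejected).
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Example: two preference comparisons.
chosen = torch.tensor([1.7, 0.3])
rejected = torch.tensor([0.9, 0.5])
loss = pairwise_reward_loss(chosen, rejected)
```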

Llama2-13B-RLHF-RM is trained with NVIDIA NeMo-Aligner, a scalable toolkit for performant and efficient model alignment. NeMo-Aligner is built on the NeMo Framework, which allows training to scale to thousands of GPUs using tensor, data, and pipeline parallelism across all components of alignment. All of our checkpoints are cross-compatible with the NeMo ecosystem, allowing for inference deployment and further customization.

[^1]: as well as ~5k proprietary datapoints that we are unable to release due to data vendor restrictions
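
For intuition about the inference interface described above, the sketch below shows one way a multi-turn conversation could be flattened into the text that gets scored, with the preference score read at the final assistant turn. The role tags and joining format here are assumptions for illustration; consult the NeMo-Aligner documentation for the chat template this checkpoint actually expects.

```python
# Illustrative only: flatten a multi-turn conversation into a single string.
# The reward model assigns its score based on the final assistant turn.
conversation = [
    ("User", "How do I sort a list in Python?"),
    ("Assistant", "Use sorted(my_list), or my_list.sort() to sort in place."),
    ("User", "Which should I prefer?"),
    ("Assistant", "Prefer sorted() unless you want in-place mutation."),
]

def to_prompt(turns):
    """Join turns into the text whose final assistant turn gets scored.
    The 'Role: text' format is a placeholder, not the trained template."""
    return "\n".join(f"{role}: {text}" for role, text in turns)

print(to_prompt(conversation))
```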

Usage:

Training a reward model is an essential component of Reinforcement Learning from Human Feedback (RLHF). By developing a strong reward model, we can mitigate the risk of reward hacking and ensure that the actor is incentivized to produce helpful responses. We are open-sourcing this reward model so that users can seamlessly integrate it with Proximal Policy Optimization (PPO) training using NeMo-Aligner. For detailed instructions on how to conduct the training, please refer to our RLHF training user guide.
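
During PPO, the reward model's scalar score is commonly combined with a per-token KL penalty against the reference (SFT) policy, which is one standard way to discourage reward hacking. The reward shaping and the `kl_coef` value below are illustrative assumptions rather than NeMo-Aligner defaults.

```python
# Sketch of combining the RM's scalar score with a per-token KL penalty
# during PPO, keeping the policy close to the SFT reference model.
import torch

def shaped_rewards(rm_score: float,
                   policy_logprobs: torch.Tensor,
                   ref_logprobs: torch.Tensor,
                   kl_coef: float = 0.02) -> torch.Tensor:
    """Per-token rewards: -kl_coef * (log pi - log pi_ref) at every token,
    with the reward model's scalar score added at the final token.
    kl_coef here is an illustrative value, not a NeMo-Aligner default."""
    rewards = -kl_coef * (policy_logprobs - ref_logprobs)
    rewards[-1] += rm_score
    return rewards

# Example with a 5-token response.
pi = torch.tensor([-1.2, -0.8, -2.1, -0.5, -1.0])
ref = torch.tensor([-1.0, -0.9, -2.0, -0.7, -1.1])
print(shaped_rewards(1.3, pi, ref))
```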
