---
license: other
license_name: nvidia-open-model-license
license_link: LICENSE
---

## Nemotron-4-340B-Reward

[![Model architecture](https://img.shields.io/badge/Model%20Arch-Transformer%20Decoder-green)](#model-architecture)[![Model size](https://img.shields.io/badge/Params-340B-green)](#model-architecture)[![Language](https://img.shields.io/badge/Language-Multilingual-green)](#datasets)

### Model Overview

Nemotron-4-340B-Reward is a multi-dimensional Reward Model that can be used as part of a synthetic data generation pipeline to create training data that helps researchers and developers build their own LLMs. It consists of the Nemotron-4-340B-Base model and a linear layer that converts the final-layer representation of the end-of-response token into five scalar values, each corresponding to a [HelpSteer](https://arxiv.org/abs/2311.09528) attribute.

Given a conversation with multiple turns between a user and an assistant, it rates the following attributes (typically between 0 and 4) for every assistant turn:

1. **Helpfulness**: Overall helpfulness of the response to the prompt.
2. **Correctness**: Inclusion of all pertinent facts without errors.
3. **Coherence**: Consistency and clarity of expression.
4. **Complexity**: Intellectual depth required to write the response (i.e. whether the response could be written by anyone with basic language competency or requires deep domain expertise).
5. **Verbosity**: Amount of detail included in the response, relative to what is asked for in the prompt.

Nonetheless, if you are only interested in using it as a conventional reward model that outputs a single scalar, we recommend multiplying the predicted attributes elementwise by the weights ```[0, 0, 0, 0, 0.3, 0.74, 0.46, 0.47, -0.33]``` and summing the result (the model outputs 9 float values, in line with [Llama2-13B-SteerLM-RM](https://huggingface.co/nvidia/Llama2-13B-SteerLM-RM), but the first four are not trained or used).
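
For illustration, a minimal sketch of this weighted-sum reduction in Python; the nine attribute scores below are hypothetical placeholder values, not actual model outputs:

```python
# Collapse the 9 predicted attribute scores into a single scalar reward.
# The first four slots are untrained placeholders kept for compatibility with
# Llama2-13B-SteerLM-RM; only the last five (the HelpSteer attributes) are used.
weights = [0, 0, 0, 0, 0.3, 0.74, 0.46, 0.47, -0.33]

# Hypothetical model output: 4 unused slots followed by
# helpfulness, correctness, coherence, complexity, verbosity.
attribute_scores = [0.0, 0.0, 0.0, 0.0, 3.5, 3.2, 3.8, 1.5, 1.9]

scalar_reward = sum(w * s for w, s in zip(weights, attribute_scores))
print(f"Scalar reward: {scalar_reward:.3f}")
```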

Under the NVIDIA Open Model License, NVIDIA confirms:

- Models are commercially usable.
- You are free to create and distribute Derivative Models.
- NVIDIA does not claim ownership of any outputs generated using the Models or Derivative Models.

### License:

[NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)

### Intended use

Nemotron-4-340B-Reward is a pretrained Reward Model intended for use in English Synthetic Data Generation and English Reinforcement Learning from AI Feedback (RLAIF).

Nemotron-4-340B-Reward can be used in the alignment stage to align pretrained models to human preferences. It can also be used in cases like Reward-Model-as-a-Judge.

**Model Developer:** NVIDIA

**Model Input:** Text only
**Input Format:** String
**Input Parameters:** One-Dimensional (1D)

**Model Output:** Scalar Values (List of 9 Floats)
**Output Format:** Float
**Output Parameters:** 1D

**Model Dates:** Nemotron-4-340B-Reward was trained between December 2023 and May 2024.

**Data Freshness:** The pretraining data has a cutoff of June 2023.

### Required Hardware

BF16 Inference:
- 32x H100 (4x H100 Nodes)
- 32x A100 (4x A100 80GB Nodes)

### Usage:

You can use the model with [NeMo Aligner](https://github.com/NVIDIA/NeMo-Aligner) following the [SteerLM training user guide](https://docs.nvidia.com/nemo-framework/user-guide/latest/modelalignment/steerlm.html).

1. Spin up an inference server within the [NeMo Aligner container](https://github.com/NVIDIA/NeMo-Aligner/blob/main/Dockerfile):

```
python /opt/NeMo-Aligner/examples/nlp/gpt/serve_reward_model.py \
      rm_model_file=Nemotron-4-340B-Reward \
      trainer.num_nodes=2 \
      trainer.devices=8 \
      ++model.tensor_model_parallel_size=8 \
      ++model.pipeline_model_parallel_size=2 \
      inference.micro_batch_size=2 \
      inference.port=1424
```

2. Annotate data files using the served reward model. As an example, this can be the Open Assistant train/val files. You can then train a SteerLM model on the annotated data by following the [SteerLM training user guide](https://docs.nvidia.com/nemo-framework/user-guide/latest/modelalignment/steerlm.html#step-5-train-the-attribute-conditioned-sft-model).

```
python /opt/NeMo-Aligner/examples/nlp/data/steerlm/preprocess_openassistant_data.py --output_directory=data/oasst

python /opt/NeMo-Aligner/examples/nlp/data/steerlm/attribute_annotate.py \
      --input-file=data/oasst/train.jsonl \
      --output-file=data/oasst/train_labeled.jsonl \
      --port=1424
```
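
As a quick sanity check of the annotated output, here is a minimal sketch that prints the predicted attribute string for each assistant turn. It assumes the labeled file follows the conversational format described in step 3 below, with the predictions stored in each assistant turn's `label` field:

```python
import json

# Inspect the first annotated conversation in the labeled file.
# Assumes each line is a JSON object with a "conversations" list and that
# assistant turns carry the predicted attributes in their "label" field.
with open("data/oasst/train_labeled.jsonl") as f:
    record = json.loads(f.readline())

for turn in record["conversations"]:
    if turn["from"] == "Assistant":
        print(turn["label"])  # e.g. "helpfulness:4,correctness:4,..."
```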

3. Alternatively, this can be any conversational data file (in .jsonl) in the following format, where each line looks like:

```
{
  "conversations": [
    {"value": <user_turn_1>, "from": "User", "label": None},
    {"value": <assistant_turn_1>, "from": "Assistant", "label": <formatted_label_1>},
    {"value": <user_turn_2>, "from": "User", "label": None},
    {"value": <assistant_turn_2>, "from": "Assistant", "label": <formatted_label_2>},
  ],
  "mask": "User"
}
```

Ideally, each ```<formatted_label_n>``` refers to the ground-truth label for the assistant turn, but if ground-truth labels are not available, you can use ```helpfulness:4,correctness:4,coherence:4,complexity:2,verbosity:2``` (i.e. defaulting to moderate complexity and verbosity; adjust as needed) or simply ```helpfulness:-1```. It must not be ```None``` or an empty string.
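
As an illustration of this format, a minimal sketch that writes one hypothetical conversation to a .jsonl file, using the default label string above for the assistant turn (the file name and conversation text are made up; Python's `None` is serialized as JSON `null` for the unlabeled user turn):

```python
import json

# One conversation in the format described above: the user turn carries no
# label, and the assistant turn uses the default label string since no
# ground-truth label is available.
record = {
    "conversations": [
        {"value": "What is the capital of France?", "from": "User", "label": None},
        {
            "value": "The capital of France is Paris.",
            "from": "Assistant",
            "label": "helpfulness:4,correctness:4,coherence:4,complexity:2,verbosity:2",
        },
    ],
    "mask": "User",
}

# Each conversation is one JSON object per line (.jsonl).
with open("data/my_conversations.jsonl", "w") as f:
    f.write(json.dumps(record) + "\n")
```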

### Model Architecture:

Nemotron-4-340B-Reward is extended from Nemotron-4-340B-Base with an additional linear layer. It was trained with a global batch size of 128.

**Architecture Type:** Transformer Decoder (auto-regressive language model)

### Dataset & Training

Nemotron-4-340B-Reward was trained for 2 epochs on the NVIDIA [HelpSteer2](https://arxiv.org/abs/2406.08673) data. HelpSteer2 is a permissively licensed (CC-BY-4.0) preference dataset with ten thousand English response pairs and can be found [here](https://huggingface.co/datasets/nvidia/HelpSteer2).
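
If you want to inspect the training data, here is a minimal sketch of loading HelpSteer2 with the Hugging Face `datasets` library (assuming the library is installed; split names follow the dataset card and may change):

```python
from datasets import load_dataset

# Load the HelpSteer2 training split from the Hugging Face Hub.
ds = load_dataset("nvidia/HelpSteer2", split="train")

# Each example contains a prompt, a response, and the five HelpSteer
# attribute scores (helpfulness, correctness, coherence, complexity, verbosity).
print(ds[0])
```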

### Evaluation Results

#### RewardBench Primary Dataset

Evaluated using RewardBench, as introduced in the paper [RewardBench: Evaluating Reward Models for Language Modeling](https://arxiv.org/abs/2403.13787).

| Overall | Chat | Chat-Hard | Safety | Reasoning |
| ------- | ---- | --------- | ------ | --------- |
| 92.0    | 95.8 | 87.1      | 91.5   | 93.7      |

### Limitations

The model was trained on English data and is therefore optimized for English-language use cases. Extending it to other language domains will require fine-tuning.

### Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards [Insert Link to Model Card++ here]. Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

### Citation

If you find this model useful, please cite the following work:

```bibtex
@misc{wang2024helpsteer2,
      title={HelpSteer2: Open-source dataset for training top-performing reward models},
      author={Zhilin Wang and Yi Dong and Olivier Delalleau and Jiaqi Zeng and Gerald Shen and Daniel Egert and Jimmy J. Zhang and Makesh Narsimhan Sreedhar and Oleksii Kuchaiev},
      year={2024},
      eprint={2406.08673},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```