kawine committed c2da35b (parent: ff19d32)

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -156,11 +156,11 @@ If you want to finetune a model to predict human preferences (e.g., for NLG eval

  1. **Use a sufficiently large model.** With FLAN-T5-xl, you can get 65-85% accuracies depending on the subreddit.
  2. **Do in-domain prediction.** Out-of-domain performance will be poor if the subreddits are unrelated (e.g., if you fine-tune on `askculinary` preferences and test on `askcarguys` preferences).
- 3. **Preprocess the data**. The total input length should fit under the model's token limit (usually 512 tokens).
+ 3. **Preprocess the data.** The total input length should fit under the model's token limit (usually 512 tokens).
  Although models like FLAN-T5 use positional embeddings, we found that the loss would not converge if we finetuned it on the entire input.
  To avoid this, truncate the post text (in the `history` field) as much as possible, such that the whole input is under 512 tokens (do not truncate the comment(s), however).
  If this is still over 512 tokens, simply skip the example.
- 4. **Train for 1 epoch only**, as the [InstructGPT paper](https://arxiv.org/abs/2203.02155) suggests.
+ 4. **Train for fewer epochs.** The [InstructGPT paper](https://arxiv.org/abs/2203.02155) suggests training a reward model for only 1 epoch.
  Since the same comment appears in multiple preferences, it is easy to overfit to the data.
  5. **Training on less data may help.**
  Preferences with a large score ratio (e.g., comment A having 2x the score of comment B) will provide a stronger signal for finetuning the model, so you may only want to consider preferences above a certain `score_ratio`.
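As a reference for tip 3, here is a minimal sketch of the truncation preprocessing, assuming a FLAN-T5 tokenizer and the SHP fields `history`, `human_ref_A`, and `human_ref_B`; the prompt template is a hypothetical placeholder, not the exact format behind the reported accuracies.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")
MAX_LEN = 512

# Hypothetical input format; substitute whatever template you finetune with.
TEMPLATE = "POST: {post}\n\nRESPONSE A: {a}\n\nRESPONSE B: {b}\n\nWhich response is better?"

def build_input(example):
    """Truncate the post (`history`) so the whole input fits in MAX_LEN tokens.

    The comments are never truncated; returns None to signal that the example
    should be skipped because even an empty post would not fit.
    """
    a, b = example["human_ref_A"], example["human_ref_B"]
    # Token budget left for the post after counting everything else.
    fixed_len = len(tokenizer(TEMPLATE.format(post="", a=a, b=b)).input_ids)
    budget = MAX_LEN - fixed_len
    if budget <= 0:
        return None  # skip: the comments alone already exceed the limit
    post_ids = tokenizer(example["history"], truncation=True, max_length=budget).input_ids
    post = tokenizer.decode(post_ids, skip_special_tokens=True)
    return TEMPLATE.format(post=post, a=a, b=b)
```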
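The one-epoch schedule from tip 4 maps directly onto a `transformers` `TrainingArguments` config; apart from `num_train_epochs=1`, the values below are placeholders rather than recommended settings.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="flan-t5-xl-shp",    # hypothetical output path
    num_train_epochs=1,             # the single-epoch schedule from tip 4
    per_device_train_batch_size=8,  # placeholder
    learning_rate=1e-4,             # placeholder
)
```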
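Tip 5's filtering amounts to a one-line `filter` with the `datasets` library; the repo id and the 2.0 cutoff below are illustrative assumptions, not values prescribed by the README.

```python
from datasets import load_dataset

# Assumed Hub repo id for the Stanford Human Preferences dataset.
train = load_dataset("stanfordnlp/SHP", split="train")

# Keep only strongly decided preferences: score_ratio >= 2.0 means the
# preferred comment has at least 2x the score of the other (cutoff is illustrative).
strong = train.filter(lambda x: x["score_ratio"] >= 2.0)
print(f"kept {len(strong)} of {len(train)} preferences")
```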