Update README.md
README.md
CHANGED
@@ -27,7 +27,7 @@ How is SHP different from [Anthropic's HH-RLHF dataset](https://huggingface.co/d
 
 | Dataset | Input | Output | No. Domains | Data Format |
 | -------------------- | -------------------------- | ---------------------------- | ------------------------- | ------------------------------------- |
-| SHP | Reddit post and comments | Aggregate Preference Label
+| SHP | Reddit post and comments | Aggregate Preference Label with Scores | 18 (cooking, cars, ...) | Question/Answer + Assertion/Response |
 | Anthropic/HH-RLHF | Dialogue history with LLM | Individual Preference Label | 2 (harmful, helpful) | Multi-turn Dialogue |
 
 
@@ -169,7 +169,9 @@ If you want to finetune a model to predict human preferences (e.g., for NLG eval
 ### Evaluating
 
 Since it is easier to predict stronger preferences than weaker ones (e.g., preferences with a big difference in comment score), we recommend reporting a performance curve instead of a single number.
-For example, here is the accuracy curve for a FLAN-T5-xl model trained
+For example, here is the accuracy curve for a FLAN-T5-xl model trained on the askculinary data using the suggestions above.
+
+The orange line is without filtering the training data and the blue line is with training only on preferences with a 2+ score ratio and using no more than 5 preferences from each post to prevent overfitting:
 
 
 
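
The curve recommended above can be computed by bucketing the test set by preference strength. Here is a minimal sketch, assuming SHP-style `score_ratio` and `labels` fields on each evaluation example and a parallel list of 0/1 model predictions; the threshold grid is illustrative, not taken from the README:

```python
# Sketch: accuracy as a function of preference strength. Assumes each
# evaluation example is a dict with SHP-style `score_ratio` and `labels`
# fields, and `predictions` is a parallel list of 0/1 model choices.
# The threshold grid is illustrative, not taken from the README.
def accuracy_curve(examples, predictions, thresholds=(1.0, 1.5, 2.0, 3.0, 5.0)):
    curve = {}
    for t in thresholds:
        # Keep only preferences at least as strong as the threshold t.
        subset = [(ex, p) for ex, p in zip(examples, predictions)
                  if ex["score_ratio"] >= t]
        if subset:
            curve[t] = sum(p == ex["labels"] for ex, p in subset) / len(subset)
    return curve
```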
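The training-data filter described in the added lines (a 2+ score ratio and at most 5 preferences per post) could look roughly like the sketch below; `score_ratio` and `post_id` follow the SHP field names, and shuffling before truncation is an assumption about how the 5 preferences are chosen:

```python
import random
from collections import defaultdict

# Sketch: keep only preferences with a 2+ score ratio and no more than 5
# preferences per post, to prevent overfitting to any one post. Field names
# (`score_ratio`, `post_id`) follow the SHP dataset card; the per-post
# sampling strategy here is an assumption.
def filter_training_data(examples, min_ratio=2.0, max_per_post=5, seed=0):
    by_post = defaultdict(list)
    for ex in examples:
        if ex["score_ratio"] >= min_ratio:  # drop weak preferences
            by_post[ex["post_id"]].append(ex)
    rng = random.Random(seed)
    kept = []
    for post_examples in by_post.values():
        rng.shuffle(post_examples)          # sample rather than take the first 5
        kept.extend(post_examples[:max_per_post])
    return kept
```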