Update README.md
README.md
@@ -6,7 +6,7 @@ metrics:
 - accuracy
 ---
 
-GPT2 large model trained on Anthropic/hh-rlhf harmless dataset
+GPT2 large model trained on the **Anthropic/hh-rlhf harmless dataset**. It is specifically used for harmful response detection or RLHF. It achieves an accuracy of **0.73698** on the test set, which nearly matches other models of larger size.
 
 Note: 1. Remember to use the formulation of the Anthropic/hh-rlhf dataset for inference. 2. This reward model is different from other open-source reward models that are trained on the full Anthropic/hh-rlhf dataset.
 
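The note about using the hh-rlhf formulation for inference can be sketched as below. This is a minimal sketch, not part of the model card: the helper name `format_hh_rlhf` is hypothetical, and it assumes the public Anthropic/hh-rlhf convention of alternating `\n\nHuman:` / `\n\nAssistant:` turns in a single string.

```python
def format_hh_rlhf(turns):
    """Join (role, text) pairs into one string using the hh-rlhf
    convention: each turn starts with "\n\nHuman:" or "\n\nAssistant:"."""
    return "".join(f"\n\n{role}: {text}" for role, text in turns)

# The reward model would then score the formatted string, e.g. the
# full conversation ending with the assistant response to be evaluated.
prompt = format_hh_rlhf([
    ("Human", "How do I make a cake?"),
    ("Assistant", "Start by preheating the oven to 350F..."),
])
```

Feeding a string in any other layout (plain Q&A, chat templates from other models) may shift the reward scores, since the model only saw this formulation during training.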