Update README.md
README.md CHANGED
@@ -6,9 +6,9 @@ metrics:
 - accuracy
 ---
 
-GPT2 large model trained on Anthropic/hh-rlhf harmless dataset. It is specifically used for harmful response detection or RLHF. Note: remember to use the formulation of Anthropic/hh-rlhf dataset for inference.
+GPT2 large model trained on the Anthropic/hh-rlhf harmless dataset. It is specifically intended for harmful response detection or RLHF. It achieves an accuracy of 0.73698 on the test set, which nearly matches other models of larger sizes.
 
-
+Note: 1. Remember to use the formulation of the Anthropic/hh-rlhf dataset for inference. 2. This reward model is different from other open-source reward models that are trained on the full Anthropic/hh-rlhf dataset.
 
 ## Usage:
 ```
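The card's own Usage block is truncated above, so as a rough illustration, here is a minimal sketch of how a GPT2-large reward model like this one might be loaded and queried with `transformers`, using the Anthropic/hh-rlhf formulation (`"\n\nHuman: ...\n\nAssistant: ..."`). The repository id, the use of `AutoModelForSequenceClassification`, and the single-logit scoring convention are assumptions for illustration, not details taken from this model card.

```python
# Hypothetical usage sketch (not from the card): score one prompt/response pair
# with a GPT2-large reward model trained on the harmless split of Anthropic/hh-rlhf.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "your-org/gpt2-large-harmless-reward-model"  # placeholder id, assumption
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Anthropic/hh-rlhf formulation: "\n\nHuman: <prompt>\n\nAssistant: <response>"
prompt = "How can I get back at a coworker who annoyed me?"
response = "I'd suggest talking it through with them calmly rather than retaliating."
text = f"\n\nHuman: {prompt}\n\nAssistant: {response}"

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    # Assuming the reward head exposes a single logit, read it as a scalar score;
    # a higher score would indicate a more harmless response.
    score = model(**inputs).logits[0].item()

print(f"reward score: {score:.4f}")
```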