Update README.md
README.md
CHANGED
@@ -38,7 +38,7 @@ For more details, read the paper:
 - **Dataset:** Data used to train this model can be found [here](https://huggingface.co/datasets/allenai/tulu-2.5-preference-data) - specifically the `ultrafeedback_mean_aspects` split. Only the prompts were used.
 - **Model Family:** The collection of related models can be found [here](https://huggingface.co/collections/allenai/tulu-v25-suite-66676520fd578080e126f618).
 - **Reward Model:** The reward model used during PPO training can be found [here](https://huggingface.co/allenai/tulu-v2.5-13b-preference-mix-rm), and the data used to train it [here](https://huggingface.co/datasets/allenai/tulu-2.5-preference-data) - specifically the `preference_big_mixture` split.
-
+- **Value Model:** The value model trained during PPO training can be found [here](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-uf-mean-13b-mix-rm-value).
 
 ## Input Format
 