Update README.md
README.md CHANGED
@@ -4,15 +4,17 @@ language:
 - en
 pipeline_tag: text-generation
 base_model:
-- allenai/OLMo-2-1124-13B-
+- allenai/OLMo-2-1124-13B-SFT
 library_name: transformers
+datasets:
+- allenai/olmo-2-1124-13b-preference-mix
 ---

 <img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

 # OLMo-2-1124-13B-DPO

-OLMo-2 13B DPO November 2024 is finetuned variant of the [OLMo-2 13B November 2024](https://huggingface.co/allenai/OLMo2-13B-1124) model, which has undergone supervised finetuning on the [Tülu 3 dataset](https://huggingface.co/datasets/allenai/tulu-3-sft-mixture) and further DPO training.
+OLMo-2 13B DPO November 2024 is a finetuned variant of the [OLMo-2 13B November 2024](https://huggingface.co/allenai/OLMo2-13B-1124) model, which has undergone supervised finetuning on the [Tülu 3 dataset](https://huggingface.co/datasets/allenai/tulu-3-sft-mixture) and further DPO training on [this dataset](https://huggingface.co/datasets/allenai/olmo-2-1124-13b-preference-mix).
 Tülu 3 is designed for state-of-the-art performance on a diversity of tasks in addition to chat, such as MATH, GSM8K, and IFEval.
 Check out [the OLMo-2 paper](https://TODO) or [Tülu 3 paper](https://arxiv.org/abs/2411.15124) for more details!

@@ -26,7 +28,7 @@ The core models released in this batch include the following:
 | **Base Model** | [allenai/OLMo2-7B-1124](https://huggingface.co/allenai/OLMo2-7B-1124) | [allenai/OLMo-2-13B-1124](https://huggingface.co/allenai/OLMo-2-13B-1124) |
 | **SFT** | [allenai/OLMo-2-1124-7B-SFT](https://huggingface.co/allenai/OLMo-2-1124-7B-SFT) | [allenai/OLMo-2-1124-13B-SFT](https://huggingface.co/allenai/OLMo-2-1124-13B-SFT) |
 | **DPO** | [allenai/OLMo-2-1124-7B-DPO](https://huggingface.co/allenai/OLMo-2-1124-7B-DPO) | [allenai/OLMo-2-1124-13B-DPO](https://huggingface.co/allenai/OLMo-2-1124-13B-DPO) |
-| **Final Models (RLVR)** |
+| **Final Models (RLVR)** | [allenai/OLMo-2-1124-7B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct) | [allenai/OLMo-2-1124-13B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-13B-Instruct) |
 | **Reward Model (RM)** | [allenai/OLMo-2-1124-7B-RM](https://huggingface.co/allenai/OLMo-2-1124-7B-RM) | (Same as 8B) |

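Outside the diff itself: since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, a minimal loading sketch may be useful for readers of the updated README. It is an assumption, not part of the card, that the installed transformers release includes OLMo-2 support and that the DPO checkpoint ships a chat template, as the post-trained OLMo-2 variants generally do; the model ID is taken from the table above.

```python
# Hedged sketch: load the DPO checkpoint via Hugging Face transformers and run
# one chat-formatted generation. Assumes OLMo-2 support in the installed
# transformers version and a chat template on the tokenizer (not guaranteed here).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-1124-13B-DPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format the prompt with the chat template, since this is a chat-tuned model.
messages = [{"role": "user", "content": "In one sentence, what does DPO training do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

At 13B parameters the checkpoint is large, so in practice one would likely add a reduced-precision dtype and device placement when calling `from_pretrained`; the sketch keeps the defaults for brevity.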