Commit e34ea60 (parent: c17a3ea), committed by vwxyzjn

Update README.md

Files changed (1):
  README.md +1 -1
README.md CHANGED
@@ -20,7 +20,7 @@ Upon the initial release of OLMo-2 models, we realized the post-trained models d
 
 ## Release Documentation
 
-OLMo 2 7B DPO November 2024 is post-trained variant of the [OLMo 2 7B November 2024](https://huggingface.co/allenai/OLMo2-7B-1124) model, which has undergone supervised finetuning on an OLMo-specific variant of the [Tülu 3 dataset](allenai/tulu-3-sft-olmo-2-mixture) and further DPO training on [this dataset](allenai/olmo-2-1124-7b-preference-mix).
+OLMo 2 7B DPO November 2024 is post-trained variant of the [OLMo 2 7B November 2024](https://huggingface.co/allenai/OLMo2-7B-1124) model, which has undergone supervised finetuning on an OLMo-specific variant of the [Tülu 3 dataset](https://huggingface.co/datasets/allenai/tulu-3-sft-olmo-2-mixture) and further DPO training on [this dataset](allenai/olmo-2-1124-7b-preference-mix).
 Tülu 3 is designed for state-of-the-art performance on a diversity of tasks in addition to chat, such as MATH, GSM8K, and IFEval.
 Check out the [OLMo 2 paper](https://arxiv.org/abs/2501.00656) or [Tülu 3 paper](https://arxiv.org/abs/2411.15124) for more details!
 
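For reference, below is a minimal sketch of loading the post-trained checkpoint described in this README with Hugging Face Transformers and PyTorch. The repo id `allenai/OLMo-2-1124-7B-DPO` and the chat-template usage are assumptions inferred from the model name; they are not stated in this commit, so adjust them to the published repository if they differ.

```python
# Minimal sketch (assumed repo id, not taken from this commit):
# load the DPO-trained OLMo 2 7B chat model and generate a reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-1124-7B-DPO"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# The post-trained variants are conversational, so format the prompt
# with the tokenizer's chat template before generating.
messages = [{"role": "user", "content": "Solve: 12 * 7 = ?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)

# Strip the prompt tokens and print only the newly generated text.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Greedy decoding and bfloat16 weights are used here only to keep the sketch small and deterministic; sampling parameters and dtype can be changed to match your hardware and use case.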