DPO Training
Qwen1.5-0.5B-Chat, DPO fine-tuned on open-ended and multiple-choice questions from various EPFL courses and on the Orca Math dataset, which consists of ~200K grade-school math word problems.
The model was developed during the course Modern Natural Language Processing (CS-552). It fine-tunes the base model (Qwen/Qwen1.5-0.5B-Chat) so that it more accurately answers open-ended and multiple-choice questions from various EPFL courses and the Orca Math dataset.
HuggingFace dataset: microsoft/orca-math-word-problems-200k. The EPFL dataset is not publicly available.
Training regime: cDPO with bf16 mixed precision, $\beta = 0.2$, learning rate $3 \times 10^{-6}$, and label smoothing $\epsilon = 0.2$
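For reference, the conservative DPO (cDPO) objective behind these hyperparameters can be sketched as below. This is a minimal, stdlib-only illustration for a single preference pair; the function name and scalar interface are my own, not the actual training code (which would operate on batched tensors, e.g. via TRL, where cDPO corresponds to a nonzero `label_smoothing`).

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def cdpo_loss(pi_chosen: float, pi_rejected: float,
              ref_chosen: float, ref_rejected: float,
              beta: float = 0.2, label_smoothing: float = 0.2) -> float:
    """Conservative DPO loss for one preference pair.

    Inputs are summed log-probabilities of the chosen/rejected
    responses under the policy (pi_*) and the frozen reference
    model (ref_*). `label_smoothing` is cDPO's epsilon: the assumed
    probability that the preference label is flipped.
    """
    # Implicit reward margin between chosen and rejected responses.
    logits = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # Mix the standard DPO loss with the loss for the flipped label.
    return (-(1.0 - label_smoothing) * math.log(sigmoid(logits))
            - label_smoothing * math.log(sigmoid(-logits)))
```

With `label_smoothing=0.0` this reduces to the standard DPO loss; the smoothing term keeps the loss bounded away from zero on confidently-ranked pairs, which makes training more robust to noisy preference labels.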
PEFT 0.10.0
Base model
Qwen/Qwen1.5-0.5B-Chat