---
library_name: peft
base_model: Qwen/Qwen1.5-0.5B-Chat
---

# Qwen1.5-0.5B-Chat with EPFL DPO fine-tuning

Qwen1.5-0.5B-Chat, DPO fine-tuned on a dataset of open-ended and multiple-choice questions from various EPFL courses.

## Model Details

### Model Description

The model was developed during the course Modern Natural Language Processing (CS-552). Its aim is to fine-tune the base model (Qwen/Qwen1.5-0.5B-Chat) to accurately answer open-ended and multiple-choice questions from various EPFL courses.

- **Developed by:** Emma Lise Boehly, Ahmed Aziz Ben Haj Hmida and Jan Kokla
- **Finetuned from model:** Qwen/Qwen1.5-0.5B-Chat

## Training Details

### Training Data

The training data is not publicly available.

### Training Procedure

#### Training Hyperparameters

- **Training regime:** cDPO with bf16 mixed precision, $\beta = 0.2$, learning rate $3 \times 10^{-6}$, and label smoothing $0.2$

### Framework versions

- PEFT 0.10.0
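
For reference, the cDPO objective used above (DPO with label smoothing to account for noisy preference labels) can be sketched per preference pair as follows. This is an illustrative, self-contained sketch, not the training code; the function name `cdpo_loss` and the plain-float interface are assumptions, and in practice the loss is computed in batches by a trainer such as TRL's `DPOTrainer`.

```python
import math


def cdpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected,
              beta=0.2, label_smoothing=0.2):
    """cDPO loss for a single preference pair (illustrative sketch).

    Inputs are summed log-probabilities of the chosen/rejected responses
    under the policy (pi_*) and the frozen reference model (ref_*).
    beta and label_smoothing default to the values in this model card.
    """
    # Implicit reward margin between the chosen and rejected responses.
    logits = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)

    def log_sigmoid(x):
        # Numerically stable log(sigmoid(x)).
        return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

    # Label smoothing assumes the preference label is flipped with
    # probability `label_smoothing`; plain DPO is the special case 0.0.
    return (-(1.0 - label_smoothing) * log_sigmoid(beta * logits)
            - label_smoothing * log_sigmoid(-beta * logits))
```

With a zero margin the loss reduces to $\log 2$ regardless of the smoothing value, and a larger margin in favor of the chosen response lowers the loss.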