
This model was trained as part of a series of experiments testing the performance of pure DPO vs. SFT vs. ORPO, all supported by Unsloth and Hugging Face TRL.

## Benchmarks

| Benchmark  | Score |
|------------|-------|
| Average    | 59.55 |
| ARC        | 59.56 |
| HellaSwag  | 82.39 |
| MMLU       | 62.3  |
| TruthfulQA | 40.04 |
| Winogrande | 78.45 |
| GSM8K      | 34.57 |
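This is the same six-task suite (and average) used by the old Open LLM Leaderboard, so the scores can presumably be reproduced with [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The sketch below is an assumption about the evaluation setup, not the card author's script; the per-task few-shot counts follow the leaderboard recipe.

```python
# Hedged sketch: re-run the benchmark table with lm-evaluation-harness.
# The task names and few-shot counts follow the old Open LLM Leaderboard
# recipe; this is an assumption, not the author's documented setup.
import lm_eval

TASKS = [
    ("arc_challenge", 25),
    ("hellaswag", 10),
    ("mmlu", 5),
    ("truthfulqa_mc2", 0),
    ("winogrande", 5),
    ("gsm8k", 5),
]

for task, shots in TASKS:
    results = lm_eval.simple_evaluate(
        model="hf",
        model_args="pretrained=G-reen/EXPERIMENT-SFT-m7b2-3-merged",
        tasks=[task],
        num_fewshot=shots,
    )
    print(task, results["results"][task])
```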

## Training Details

- **Duration:** ~6-8 hours on one Kaggle T4 with Unsloth
- **Model:** [unsloth/mistral-7b-v0.2-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-v0.2-bnb-4bit)
- **Dataset:** [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k)
- **Rank:** 8
- **Alpha:** 16
- **Learning rate:** 5e-6
- **Batch size:** 8
- **Epochs:** 1
- **Learning rate scheduler:** Linear
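For orientation, here is a minimal sketch of how these hyperparameters could be wired up with Unsloth and TRL's `SFTTrainer`. It is an illustration, not the exact training script: the sequence length, the LoRA target modules, and training on the dataset's chosen responses (dpo-mix-7k is a preference dataset) are assumptions, and depending on your TRL version `dataset_text_field`/`max_seq_length` may belong on an `SFTConfig` instead.

```python
# Hedged sketch reconstructing the setup from the hyperparameters above.
# Sequence length, target modules, and dataset preprocessing are guesses.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-v0.2-bnb-4bit",
    max_seq_length=2048,  # assumption; not stated in the card
    load_in_4bit=True,
)

# LoRA adapter matching the listed rank/alpha
model = FastLanguageModel.get_peft_model(
    model,
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # common Unsloth default; assumption
)

# dpo-mix-7k is a preference dataset; for the SFT run one would train on
# the "chosen" conversations (assumption about preprocessing).
dataset = load_dataset("argilla/dpo-mix-7k", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes conversations were pre-rendered to ChatML text
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=8,
        learning_rate=5e-6,
        num_train_epochs=1,
        lr_scheduler_type="linear",
        output_dir="outputs",
    ),
)
trainer.train()
```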

**Prompt Format:** ChatML

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Why is the sky blue?<|im_end|>
<|im_start|>assistant
```
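Rather than hand-writing the special tokens, you can render this format with `transformers`' `apply_chat_template`. This assumes the tokenizer uploaded with this model is configured with a ChatML chat template.

```python
# Hedged example: build the ChatML prompt shown above via the chat template.
# Assumes the repo's tokenizer carries a ChatML chat_template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("G-reen/EXPERIMENT-SFT-m7b2-3-merged")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Why is the sky blue?"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant" turn
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```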

## WandB Reports

*(WandB training report screenshot)*

