
This model was trained as part of a series of experiments testing the performance of pure DPO vs. SFT vs. ORPO, all run with Unsloth and Hugging Face TRL.

Note: Completely broken. Do not use.

Benchmarks

| Benchmark  | Score |
|------------|-------|
| Average    | 59.52 |
| ARC        | 59.47 |
| HellaSwag  | 82.42 |
| MMLU       | 62.21 |
| TruthfulQA | 40.01 |
| Winogrande | 78.3  |
| GSM8K      | 34.72 |

Training Details

Duration: ~10-12 hours on one Kaggle T4 with Unsloth

Model: https://huggingface.co/unsloth/mistral-7b-v0.2-bnb-4bit

Dataset: https://huggingface.co/datasets/argilla/dpo-mix-7k

Rank: 8

Alpha: 16

Learning rate: 5e-6

Beta: 0.1

Batch size: 8

Epochs: 1

Learning rate scheduler: Linear

Prompt Format: You are a helpful assistant.<s>[INST] PROMPT [/INST]RESPONSE</s> (the <s> start token must be added manually; it is not inserted automatically)
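
A minimal sketch of how the template above might be assembled in Python. The build_prompt helper is hypothetical (not part of the released code), and when tokenizing the resulting string you would likely want add_special_tokens=False so the <s> already in the template is not duplicated:

```python
# Hypothetical helper illustrating the prompt format from this card.
# The <s> start token is written into the string explicitly because,
# per the card, it is not added automatically.
def build_prompt(prompt: str, response: str = "") -> str:
    text = f"You are a helpful assistant.<s>[INST] {prompt} [/INST]{response}"
    if response:
        text += "</s>"  # close the turn only when a full response is included
    return text

print(build_prompt("What is direct preference optimization?"))
# You are a helpful assistant.<s>[INST] What is direct preference optimization? [/INST]
```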
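
For reference, a rough sketch of how the configuration listed above could be reproduced with Unsloth and TRL's DPOTrainer. This is an assumption-laden reconstruction, not the original training script: dataset preprocessing into prompt/chosen/rejected strings is omitted, the per-device batch size vs. gradient accumulation split is assumed (only the effective batch size of 8 is given), and newer TRL releases move beta and the length limits onto DPOConfig.

```python
from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import DPOTrainer
from datasets import load_dataset

# Load the 4-bit base model listed above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-v0.2-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# LoRA adapters with the rank/alpha from the card.
model = FastLanguageModel.get_peft_model(
    model,
    r=8,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Preference dataset; mapping it to prompt/chosen/rejected columns is omitted here.
dataset = load_dataset("argilla/dpo-mix-7k", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=None,                     # PEFT path: reference model handled implicitly
    beta=0.1,
    train_dataset=dataset,
    tokenizer=tokenizer,
    max_length=1024,                    # assumed sequence limits
    max_prompt_length=512,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # 2 x 4 accumulation = effective batch size 8 (assumed split)
        gradient_accumulation_steps=4,
        learning_rate=5e-6,
        lr_scheduler_type="linear",
        num_train_epochs=1,
        fp16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```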

WandB Reports

(Training run charts were embedded as images; not reproduced here.)

