Model Description

This is the meta-llama/Llama-3.2-1B base model fine-tuned on the mlabonne/orpo-dpo-mix-40k dataset.
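A minimal usage sketch with the standard Hugging Face transformers text-generation pipeline (the prompt and generation parameters are illustrative, not part of this card):

```python
# Usage sketch: load the fine-tuned model with transformers (illustrative parameters).
from transformers import pipeline

model_id = "DamiFass/llama3.2-1B-finetuned-on-mlabonne"

# torch_dtype="auto" lets transformers pick the checkpoint's native precision.
generator = pipeline("text-generation", model=model_id, torch_dtype="auto")

output = generator("The capital of France is", max_new_tokens=32, do_sample=False)
print(output[0]["generated_text"])
```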

Evaluation Results

We used EleutherAI's lm-evaluation-harness to evaluate this fine-tuned version of meta-llama/Llama-3.2-1B on the HellaSwag benchmark.

HellaSwag

| Tasks     | Version | Filter | n-shot | Metric   |   | Value  |   | Stderr |
|-----------|--------:|--------|-------:|----------|---|-------:|---|-------:|
| hellaswag |       1 | none   |      0 | acc      | ↑ | 0.4773 | ± | 0.0050 |
|           |         | none   |      0 | acc_norm | ↑ | 0.6358 | ± | 0.0048 |
Model size: 1.24B params (Safetensors, F32 tensors)

Model tree for DamiFass/llama3.2-1B-finetuned-on-mlabonne

Base model: meta-llama/Llama-3.2-1B → this model