
Model Card for mayacinka/OrpoLlama-3-8B

A quick Llama 3 8B fine-tune with ORPO, demonstrating that the model can be fine-tuned in only 2 hours. Thanks to Maxime Labonne's notebook (a rough training sketch follows the details below):

https://colab.research.google.com/drive/1eHNWg9gnaXErdAa8_mcvjMupbSS6rDvi?usp=sharing

  • Number of training samples from the dataset: 1,500 out of 40K
  • Hardware type: L4
  • Hours of training: 2
  • Cloud provider: Google Colab
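
As a rough illustration of the workflow, here is a minimal ORPO fine-tuning sketch using TRL's `ORPOTrainer`. The base model name, the dataset identifier, the 1,500-sample subset, and all hyperparameters are assumptions based on the description above, not the exact notebook settings; the original notebook likely also uses LoRA/quantization to fit on an L4, which this sketch omits for brevity.

```python
# Minimal ORPO fine-tuning sketch with TRL (assumed settings, not the exact notebook).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "meta-llama/Meta-Llama-3-8B"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16)

# ORPOTrainer expects a preference dataset with prompt/chosen/rejected columns;
# the dataset name is an assumption, and the raw data may need chat-template
# formatting first. Take 1,500 of the ~40K samples, as described above.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")
dataset = dataset.shuffle(seed=42).select(range(1500))

config = ORPOConfig(
    output_dir="OrpoLlama-3-8B",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    learning_rate=8e-6,
    beta=0.1,               # weight of the odds-ratio preference term in the ORPO loss
    max_length=1024,
    max_prompt_length=512,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
trainer.save_model("OrpoLlama-3-8B")
```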
  • Model size: 8.03B params
  • Tensor type: FP16 (Safetensors)

Dataset used to train mayacinka/OrpoLlama-3-8B