
Meta-Llama-3-8B-Instruct-mirage-meta-llama-3-sft-instruct

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the nthakur/mirage-meta-llama-3-sft-instruct dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2431
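
Below is a minimal inference sketch for this adapter. It assumes the LoRA adapter (and its tokenizer) can be loaded directly from this repository with PEFT, and that you have access to the gated meta-llama/Meta-Llama-3-8B-Instruct base model; the prompt, dtype, and generation settings are placeholders to adapt to your setup.

```python
# Minimal inference sketch (assumption: tokenizer is saved alongside the adapter;
# if not, load it from meta-llama/Meta-Llama-3-8B-Instruct instead).
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "nthakur/Meta-Llama-3-8B-Instruct-mirage-meta-llama-3-sft-instruct"

tokenizer = AutoTokenizer.from_pretrained(adapter_id)
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,  # adjust for your hardware
    device_map="auto",
)

# Placeholder chat prompt; the Llama 3 chat template is applied by the tokenizer.
messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```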

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for how they map onto a TrainingArguments configuration):

  • learning_rate: 0.0002
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • total_eval_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1
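
The hyperparameters above can be reconstructed as a transformers TrainingArguments object. This is only a sketch under the assumption that a Trainer/SFTTrainer-style script was used; the actual training script is not part of this card, and the output directory name is illustrative.

```python
# Reconstruction of the listed hyperparameters (not the original training script).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Meta-Llama-3-8B-Instruct-mirage-meta-llama-3-sft-instruct",
    learning_rate=2e-4,
    per_device_train_batch_size=2,   # x 4 GPUs x 2 accumulation steps = 16 total
    per_device_eval_batch_size=2,    # x 4 GPUs = 8 total
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    adam_beta1=0.9,                  # transformers defaults, listed for completeness
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```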

Training results

Training Loss | Epoch  | Step | Validation Loss
--------------|--------|------|----------------
0.3403        | 0.0597 | 200  | 0.3074
0.3224        | 0.1195 | 400  | 0.2954
0.3055        | 0.1792 | 600  | 0.2886
0.2899        | 0.2389 | 800  | 0.2804
0.3116        | 0.2987 | 1000 | 0.2772
0.3101        | 0.3584 | 1200 | 0.2728
0.2913        | 0.4182 | 1400 | 0.2679
0.2765        | 0.4779 | 1600 | 0.2625
0.2697        | 0.5376 | 1800 | 0.2601
0.2759        | 0.5974 | 2000 | 0.2557
0.2640        | 0.6571 | 2200 | 0.2524
0.2705        | 0.7168 | 2400 | 0.2490
0.2694        | 0.7766 | 2600 | 0.2466
0.2639        | 0.8363 | 2800 | 0.2450
0.2598        | 0.8961 | 3000 | 0.2435
0.2483        | 0.9558 | 3200 | 0.2432

Framework versions

  • PEFT 0.10.0
  • Transformers 4.44.0
  • Pytorch 2.4.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
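
If loading fails, a quick way to compare your environment against the versions above is to print the installed package versions; close-but-newer versions usually work, but exact pins are the safest assumption.

```python
# Environment check against the versions this adapter was trained with.
import peft, transformers, torch, datasets, tokenizers

print("peft:", peft.__version__)                  # trained with 0.10.0
print("transformers:", transformers.__version__)  # trained with 4.44.0
print("torch:", torch.__version__)                # trained with 2.4.0+cu121
print("datasets:", datasets.__version__)          # trained with 2.20.0
print("tokenizers:", tokenizers.__version__)      # trained with 0.19.1
```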