
UTI_L3_1000steps_1e6rate_SFT

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.9883

Model description

More information needed

Intended uses & limitations

More information needed
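
Pending fuller documentation from the author, the snippet below is a minimal sketch of how the checkpoint can be queried locally with the Hugging Face transformers library. The prompt, generation settings, and hardware choices are illustrative assumptions, not documented usage.

```python
# Minimal local-inference sketch (assumed usage; intended use is not yet
# documented on this card). Requires a GPU with enough memory for an
# 8B-parameter model in fp16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/UTI_L3_1000steps_1e6rate_SFT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the published checkpoint is stored in fp16
    device_map="auto",
)

# Chat formatting is inherited from the Meta-Llama-3-8B-Instruct base model.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,              # illustrative setting
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```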

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-06
  • train_batch_size: 2
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 4
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • training_steps: 1000
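
For reference, here is a minimal sketch of a transformers TrainingArguments configuration matching the settings above. The surrounding trainer (e.g. transformers Trainer vs. TRL's SFTTrainer), the dataset, and the mixed-precision setting are assumptions, since the card does not document them.

```python
# Sketch of a TrainingArguments setup reproducing the listed hyperparameters.
# Assumptions are marked in comments; this is not the author's actual script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="UTI_L3_1000steps_1e6rate_SFT",
    learning_rate=1e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=2,   # total train batch size: 2 * 2 = 4
    optim="adamw_torch",             # the listed Adam betas/epsilon are the defaults
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    evaluation_strategy="steps",
    eval_steps=25,                   # matches the 25-step eval cadence below
    fp16=True,                       # assumed; the published checkpoint is fp16
)
```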

Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.5921        | 0.3333  | 25   | 2.4381          |
| 1.8551        | 0.6667  | 50   | 1.5631          |
| 1.2769        | 1.0     | 75   | 1.1985          |
| 1.1027        | 1.3333  | 100  | 1.1215          |
| 1.0509        | 1.6667  | 125  | 1.1006          |
| 0.9917        | 2.0     | 150  | 1.0852          |
| 0.9325        | 2.3333  | 175  | 1.0986          |
| 0.9627        | 2.6667  | 200  | 1.0883          |
| 0.9724        | 3.0     | 225  | 1.0865          |
| 0.7795        | 3.3333  | 250  | 1.1249          |
| 0.7455        | 3.6667  | 275  | 1.1105          |
| 0.7684        | 4.0     | 300  | 1.1214          |
| 0.6135        | 4.3333  | 325  | 1.1762          |
| 0.5911        | 4.6667  | 350  | 1.2296          |
| 0.6302        | 5.0     | 375  | 1.2176          |
| 0.4435        | 5.3333  | 400  | 1.3544          |
| 0.4558        | 5.6667  | 425  | 1.3765          |
| 0.4538        | 6.0     | 450  | 1.3526          |
| 0.2966        | 6.3333  | 475  | 1.5173          |
| 0.2836        | 6.6667  | 500  | 1.5129          |
| 0.3147        | 7.0     | 525  | 1.4603          |
| 0.2252        | 7.3333  | 550  | 1.6120          |
| 0.2143        | 7.6667  | 575  | 1.6538          |
| 0.1922        | 8.0     | 600  | 1.6461          |
| 0.1429        | 8.3333  | 625  | 1.7717          |
| 0.1491        | 8.6667  | 650  | 1.8011          |
| 0.1707        | 9.0     | 675  | 1.8125          |
| 0.1189        | 9.3333  | 700  | 1.8928          |
| 0.1274        | 9.6667  | 725  | 1.9053          |
| 0.1289        | 10.0    | 750  | 1.9127          |
| 0.111         | 10.3333 | 775  | 1.9630          |
| 0.1082        | 10.6667 | 800  | 1.9689          |
| 0.1139        | 11.0    | 825  | 1.9652          |
| 0.1062        | 11.3333 | 850  | 1.9791          |
| 0.1071        | 11.6667 | 875  | 1.9866          |
| 0.1053        | 12.0    | 900  | 1.9890          |
| 0.1087        | 12.3333 | 925  | 1.9848          |
| 0.1079        | 12.6667 | 950  | 1.9866          |
| 0.0994        | 13.0    | 975  | 1.9883          |
| 0.1007        | 13.3333 | 1000 | 1.9883          |
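
The validation loss bottoms out at 1.0852 around step 150 (epoch 2) and climbs steadily afterward while the training loss keeps falling, so the final checkpoint is well past the point where it began overfitting; an earlier checkpoint may generalize better. The 75 steps per epoch at an effective batch size of 4 also suggest a training set of roughly 300 examples. A quick sketch for visualizing the trajectory, with values transcribed from the table above (matplotlib assumed):

```python
# Plot the validation-loss trajectory from the results table above.
import matplotlib.pyplot as plt

steps = list(range(25, 1001, 25))  # eval every 25 steps up to step 1000
val_loss = [2.4381, 1.5631, 1.1985, 1.1215, 1.1006, 1.0852, 1.0986, 1.0883,
            1.0865, 1.1249, 1.1105, 1.1214, 1.1762, 1.2296, 1.2176, 1.3544,
            1.3765, 1.3526, 1.5173, 1.5129, 1.4603, 1.6120, 1.6538, 1.6461,
            1.7717, 1.8011, 1.8125, 1.8928, 1.9053, 1.9127, 1.9630, 1.9689,
            1.9652, 1.9791, 1.9866, 1.9890, 1.9848, 1.9866, 1.9883, 1.9883]

plt.plot(steps, val_loss, marker="o")
plt.xlabel("Step")
plt.ylabel("Validation loss")
plt.title("UTI_L3_1000steps_1e6rate_SFT validation loss")
plt.show()
```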

Framework versions

  • Transformers 4.41.2
  • PyTorch 2.0.0+cu117
  • Datasets 2.19.2
  • Tokenizers 0.19.1