---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
tags:
  - generated_from_trainer
model-index:
  - name: Qwen2.5-Coder-1.5B-Instruct-Airscript
    results: []
---

Qwen2.5-Coder-1.5B-Instruct-Airscript

This model is a PEFT adapter fine-tuned from Qwen/Qwen2.5-Coder-1.5B-Instruct on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.4578
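
If this loss is the mean per-token cross-entropy, it corresponds to a perplexity of roughly exp(1.4578) ≈ 4.30 on the evaluation set.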

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0005
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 30
  • training_steps: 1600
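
A minimal sketch of how these settings could map onto transformers' TrainingArguments together with a PEFT LoRA setup. The dataset and the LoRA parameters (r, lora_alpha, target modules) are assumptions, since the card does not specify them; Trainer's default AdamW already uses betas=(0.9, 0.999) and epsilon=1e-08, matching the optimizer listed above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-Coder-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA adapter config -- illustrative values, not taken from the card.
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)

# Hyperparameters from the list above (effective batch size 4 * 4 = 16).
args = TrainingArguments(
    output_dir="Qwen2.5-Coder-1.5B-Instruct-Airscript",
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=30,
    max_steps=1600,
    eval_strategy="steps",
    eval_steps=100,
    logging_steps=100,
)

# train_dataset / eval_dataset are placeholders: the card does not identify the data.
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```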

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1783        | 0.0625 | 100  | 2.1006          |
| 1.8426        | 0.125  | 200  | 1.8535          |
| 1.7343        | 0.1875 | 300  | 1.7350          |
| 1.6313        | 0.25   | 400  | 1.6520          |
| 1.5817        | 0.3125 | 500  | 1.5982          |
| 1.5498        | 0.375  | 600  | 1.5604          |
| 1.5019        | 0.4375 | 700  | 1.5322          |
| 1.4852        | 0.5    | 800  | 1.5103          |
| 1.461         | 0.5625 | 900  | 1.4939          |
| 1.4483        | 0.625  | 1000 | 1.4820          |
| 1.4434        | 0.6875 | 1100 | 1.4723          |
| 1.4254        | 0.75   | 1200 | 1.4659          |
| 1.4224        | 0.8125 | 1300 | 1.4619          |
| 1.4188        | 0.875  | 1400 | 1.4596          |
| 1.4245        | 0.9375 | 1500 | 1.4585          |
| 1.4172        | 1.0    | 1600 | 1.4578          |

Framework versions

  • PEFT 0.13.2
  • Transformers 4.45.2
  • Pytorch 2.5.0
  • Datasets 3.0.1
  • Tokenizers 0.20.1
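
The fine-tuned weights are a PEFT adapter and are loaded on top of the base model. Below is a minimal loading sketch; the adapter repo id ("cy948/Qwen2.5-Coder-1.5B-Instruct-Airscript") and the example prompt are assumptions based on the card title, not confirmed by the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-Coder-1.5B-Instruct"
adapter_id = "cy948/Qwen2.5-Coder-1.5B-Instruct-Airscript"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the PEFT adapter

# Chat-style generation using the Qwen2.5 chat template; the prompt is illustrative.
messages = [{"role": "user", "content": "Write an AirScript snippet that sums the values in column A."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```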