---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Qwen2.5-Coder-1.5B-Instruct-Airscript
  results: []
---

# Qwen2.5-Coder-1.5B-Instruct-Airscript

This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 1.4578

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 1599
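The hyperparameters above imply an effective batch size of 4 × 4 = 16 and a cosine learning-rate decay with 30 warmup steps. As a rough sanity check, this can be sketched in plain Python (the function below is an illustrative re-implementation of the schedule shape, not the exact `transformers` scheduler):

```python
import math

# Effective batch size: per-device batch size x gradient accumulation steps.
train_batch_size = 4
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 16

def cosine_lr_with_warmup(step, base_lr=5e-4, warmup_steps=30, total_steps=1599):
    """Linear warmup to base_lr, then cosine decay to 0 over the remaining steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # linear warmup
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay

print(total_train_batch_size)                 # 16
print(cosine_lr_with_warmup(30))              # peak LR: 0.0005
print(round(cosine_lr_with_warmup(1599), 8))  # decayed to ~0.0
```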

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1783        | 0.0625 | 100  | 2.1005          |
| 1.8424        | 0.1251 | 200  | 1.8531          |
| 1.7342        | 0.1876 | 300  | 1.7347          |
| 1.6314        | 0.2502 | 400  | 1.6523          |
| 1.5815        | 0.3127 | 500  | 1.5977          |
| 1.5495        | 0.3752 | 600  | 1.5601          |
| 1.5015        | 0.4378 | 700  | 1.5319          |
| 1.4848        | 0.5003 | 800  | 1.5099          |
| 1.4606        | 0.5629 | 900  | 1.4929          |
| 1.4478        | 0.6254 | 1000 | 1.4813          |
| 1.4428        | 0.6879 | 1100 | 1.4717          |
| 1.4248        | 0.7505 | 1200 | 1.4657          |
| 1.4218        | 0.8130 | 1300 | 1.4614          |
| 1.4180        | 0.8755 | 1400 | 1.4588          |
| 1.4238        | 0.9381 | 1500 | 1.4578          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.5.0
- Datasets 3.0.1
- Tokenizers 0.20.1
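To reproduce this environment, the versions above can be pinned in a `requirements.txt` (the PyPI package names below are assumed to be the standard ones for each library):

```text
peft==0.13.2
transformers==4.45.2
torch==2.5.0
datasets==3.0.1
tokenizers==0.20.1
```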