---
license: mit
library_name: peft
tags:
  - generated_from_trainer
base_model: microsoft/Phi-3-mini-128k-instruct
model-index:
  - name: working
    results: []
---

# working

This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.5929
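
The card ships no usage snippet, so here is a minimal inference sketch: it loads the base model and applies this PEFT adapter on top. The adapter repo id comes from this repository's path; the example prompt and generation settings are assumptions, not part of the original card.

```python
# Minimal sketch: load the base model, then attach this PEFT adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "microsoft/Phi-3-mini-128k-instruct"
adapter_id = "Narkantak/phi3-Intent-entity-Classifier-Ashu"  # this repo

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # Phi-3 needs this on Transformers 4.39.x
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Example prompt (an assumption; the card does not document the prompt format).
prompt = "Classify the intent and entities: book a flight to Paris tomorrow"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```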

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):

- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 30
- mixed_precision_training: Native AMP
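
As a rough sketch, the settings above map onto `transformers.TrainingArguments` as shown below. Only the listed values come from this card; `output_dir` and any argument not listed are assumptions or library defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="working",           # assumption: matches the model name
    learning_rate=2e-4,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=42,
    gradient_accumulation_steps=4,  # 6 x 4 = total train batch size of 24
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=30,
    fp16=True,                      # "Native AMP" mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
)
```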

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5322        | 0.86  | 3    | 2.4221          |
| 1.7321        | 2.0   | 7    | 1.6896          |
| 1.8073        | 2.86  | 10   | 1.3296          |
| 0.9839        | 4.0   | 14   | 0.8705          |
| 0.8891        | 4.86  | 17   | 0.6266          |
| 0.4628        | 6.0   | 21   | 0.4525          |
| 0.498         | 6.86  | 24   | 0.4093          |
| 0.3318        | 8.0   | 28   | 0.3812          |
| 0.396         | 8.86  | 31   | 0.3742          |
| 0.2809        | 10.0  | 35   | 0.3603          |
| 0.3487        | 10.86 | 38   | 0.3563          |
| 0.2479        | 12.0  | 42   | 0.3621          |
| 0.3085        | 12.86 | 45   | 0.3734          |
| 0.2225        | 14.0  | 49   | 0.3733          |
| 0.2716        | 14.86 | 52   | 0.3888          |
| 0.1899        | 16.0  | 56   | 0.4287          |
| 0.2319        | 16.86 | 59   | 0.4375          |
| 0.1594        | 18.0  | 63   | 0.4491          |
| 0.1928        | 18.86 | 66   | 0.4811          |
| 0.1307        | 20.0  | 70   | 0.5047          |
| 0.1577        | 20.86 | 73   | 0.5184          |
| 0.1077        | 22.0  | 77   | 0.5539          |
| 0.1333        | 22.86 | 80   | 0.5708          |
| 0.0922        | 24.0  | 84   | 0.5795          |
| 0.1167        | 24.86 | 87   | 0.5875          |
| 0.0818        | 25.71 | 90   | 0.5929          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
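
One way to pin a matching environment (a suggestion, not part of the original card; adjust the PyTorch install for your CUDA build) is `pip install peft==0.10.0 transformers==4.39.3 torch==2.1.2 datasets==2.18.0 tokenizers==0.15.2`.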