---
library_name: peft
tags:
- generated_from_trainer
base_model: Narkantak/Mistral-2x7b-Instruct-1x2
model-index:
- name: working
  results: []
---

# working

This model is a fine-tuned version of [Narkantak/Mistral-2x7b-Instruct-1x2](https://huggingface.co/Narkantak/Mistral-2x7b-Instruct-1x2) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.3777
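A minimal usage sketch, not an official example: it assumes this adapter is published under the repo id `Narkantak/mistral-2x7b-Intent-Classifier-Ashu` and is loaded on top of the base model with PEFT; the prompt format is a guess, since the training data is not documented here.

```python
# Sketch: load the PEFT adapter on top of the base model (adapter repo id assumed).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Narkantak/Mistral-2x7b-Instruct-1x2"
adapter_id = "Narkantak/mistral-2x7b-Intent-Classifier-Ashu"  # assumed adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

# Hypothetical intent-classification prompt; the card does not document the format.
prompt = "Classify the intent of this message: 'I want to cancel my subscription.'"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```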

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 30
- mixed_precision_training: Native AMP
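For reference, the list above maps onto `transformers.TrainingArguments` roughly as in this sketch; `output_dir` and any setting not listed (logging, saving, evaluation strategy) are assumptions:

```python
from transformers import TrainingArguments

# Sketch reconstructing the listed hyperparameters; unlisted settings are assumptions.
training_args = TrainingArguments(
    output_dir="working",            # assumed from the model-index name
    learning_rate=2e-4,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    gradient_accumulation_steps=4,   # effective train batch size: 6 * 4 = 24
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=30,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
```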

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0           | 0.96  | 12   | 0.8713          |
| 0.5738        | 2.0   | 25   | 0.4515          |
| 0.3676        | 2.96  | 37   | 0.3039          |
| 0.2127        | 4.0   | 50   | 0.2680          |
| 0.1891        | 4.96  | 62   | 0.2556          |
| 0.1427        | 6.0   | 75   | 0.2721          |
| 0.132         | 6.96  | 87   | 0.2762          |
| 0.1037        | 8.0   | 100  | 0.2844          |
| 0.1006        | 8.96  | 112  | 0.2951          |
| 0.0832        | 10.0  | 125  | 0.3044          |
| 0.0829        | 10.96 | 137  | 0.3194          |
| 0.073         | 12.0  | 150  | 0.3282          |
| 0.0757        | 12.96 | 162  | 0.3415          |
| 0.0692        | 14.0  | 175  | 0.3230          |
| 0.0738        | 14.96 | 187  | 0.3427          |
| 0.0666        | 16.0  | 200  | 0.3449          |
| 0.0712        | 16.96 | 212  | 0.3450          |
| 0.0652        | 18.0  | 225  | 0.3511          |
| 0.0698        | 18.96 | 237  | 0.3570          |
| 0.0641        | 20.0  | 250  | 0.3604          |
| 0.0691        | 20.96 | 262  | 0.3662          |
| 0.0638        | 22.0  | 275  | 0.3682          |
| 0.0689        | 22.96 | 287  | 0.3644          |
| 0.0632        | 24.0  | 300  | 0.3665          |
| 0.0681        | 24.96 | 312  | 0.3709          |
| 0.0627        | 26.0  | 325  | 0.3725          |
| 0.0677        | 26.96 | 337  | 0.3752          |
| 0.0621        | 28.0  | 350  | 0.3774          |
| 0.0645        | 28.8  | 360  | 0.3777          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
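Since PEFT adapters can be sensitive to library versions, here is a small sketch that compares the current environment against the versions above (informational only; nearby versions will usually work):

```python
# Sketch: print installed versions next to the training-time versions listed above.
import datasets, peft, tokenizers, torch, transformers

trained_with = {
    "peft": "0.10.0",
    "transformers": "4.38.2",
    "torch": "2.1.2",
    "datasets": "2.1.0",
    "tokenizers": "0.15.2",
}
installed = {
    "peft": peft.__version__,
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, expected in trained_with.items():
    print(f"{name}: installed {installed[name]}, trained with {expected}")
```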