---
base_model: mistralai/Mistral-7B-v0.3
library_name: peft
license: apache-2.0
tags:
  - unsloth
  - generated_from_trainer
model-index:
  - name: Mistral-7B-v0.3_pct_ortho_r32
    results: []
---

Mistral-7B-v0.3_pct_ortho_r32

This model is a fine-tuned version of mistralai/Mistral-7B-v0.3 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.9721
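
This adapter can be loaded on top of the base model with PEFT. A minimal sketch follows; the adapter repo id "imdatta0/Mistral-7B-v0.3_pct_ortho_r32" is an assumption based on the model name and should be replaced with the actual repo id or a local adapter path:

```python
# Minimal loading sketch: attach the LoRA adapter to the base model with PEFT.
# The adapter id "imdatta0/Mistral-7B-v0.3_pct_ortho_r32" is assumed from the
# model name above; replace it with the actual repo id or a local adapter path.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.3")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.3")
model = PeftModel.from_pretrained(base_model, "imdatta0/Mistral-7B-v0.3_pct_ortho_r32")
```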

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 64
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.02
  • num_epochs: 1
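
The values above map onto Hugging Face TrainingArguments roughly as in the sketch below. This is an approximate reconstruction under stated assumptions, not the exact training script; the output directory is a placeholder.

```python
# Approximate reconstruction of the reported hyperparameters as TrainingArguments.
# The effective batch size of 64 comes from batch size 1 x 64 accumulation steps.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Mistral-7B-v0.3_pct_ortho_r32",  # placeholder output path
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=64,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    adam_beta1=0.9,        # Adam betas and epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```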

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9673        | 0.0206 | 8    | 1.9675          |
| 1.982         | 0.0413 | 16   | 1.9762          |
| 1.9563        | 0.0619 | 24   | 1.9745          |
| 1.957         | 0.0825 | 32   | 1.9779          |
| 2.0248        | 0.1032 | 40   | 1.9852          |
| 1.9753        | 0.1238 | 48   | 1.9988          |
| 1.9752        | 0.1444 | 56   | 1.9974          |
| 2.0253        | 0.1651 | 64   | 1.9947          |
| 2.0073        | 0.1857 | 72   | 1.9872          |
| 1.9826        | 0.2063 | 80   | 1.9953          |
| 1.9907        | 0.2270 | 88   | 2.0015          |
| 1.9795        | 0.2476 | 96   | 1.9951          |
| 1.9882        | 0.2682 | 104  | 2.0020          |
| 1.9896        | 0.2889 | 112  | 1.9963          |
| 2.0177        | 0.3095 | 120  | 2.0146          |
| 2.0131        | 0.3301 | 128  | 2.0013          |
| 2.0384        | 0.3508 | 136  | 2.0017          |
| 2.0587        | 0.3714 | 144  | 2.0019          |
| 1.9998        | 0.3920 | 152  | 1.9965          |
| 1.9729        | 0.4127 | 160  | 1.9905          |
| 2.0339        | 0.4333 | 168  | 2.0233          |
| 2.0029        | 0.4539 | 176  | 1.9972          |
| 1.997         | 0.4746 | 184  | 1.9976          |
| 1.9808        | 0.4952 | 192  | 2.0007          |
| 2.0169        | 0.5158 | 200  | 1.9872          |
| 1.9605        | 0.5364 | 208  | 1.9975          |
| 2.0195        | 0.5571 | 216  | 1.9963          |
| 1.9619        | 0.5777 | 224  | 1.9878          |
| 1.9361        | 0.5983 | 232  | 2.0045          |
| 1.9932        | 0.6190 | 240  | 1.9815          |
| 1.9519        | 0.6396 | 248  | 1.9896          |
| 1.9843        | 0.6602 | 256  | 1.9901          |
| 1.963         | 0.6809 | 264  | 1.9820          |
| 1.9376        | 0.7015 | 272  | 1.9793          |
| 1.9876        | 0.7221 | 280  | 1.9885          |
| 2.0157        | 0.7428 | 288  | 1.9834          |
| 2.011         | 0.7634 | 296  | 1.9843          |
| 2.0179        | 0.7840 | 304  | 1.9779          |
| 1.9693        | 0.8047 | 312  | 1.9787          |
| 1.9632        | 0.8253 | 320  | 1.9824          |
| 1.9367        | 0.8459 | 328  | 1.9776          |
| 1.9824        | 0.8666 | 336  | 1.9730          |
| 1.9911        | 0.8872 | 344  | 1.9719          |
| 2.0075        | 0.9078 | 352  | 1.9730          |
| 1.9809        | 0.9285 | 360  | 1.9730          |
| 1.9971        | 0.9491 | 368  | 1.9722          |
| 1.9913        | 0.9697 | 376  | 1.9720          |
| 1.916         | 0.9904 | 384  | 1.9721          |

Framework versions

  • PEFT 0.12.0
  • Transformers 4.44.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1
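
To recreate this environment, something along the following lines should work; note that the PyTorch build listed above is the CUDA 12.1 wheel (2.3.0+cu121), so install the wheel matching your platform:

```bash
pip install peft==0.12.0 transformers==4.44.2 datasets==2.21.0 tokenizers==0.19.1 torch==2.3.0
```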