---
license: gemma
base_model: google/gemma-2-9b
tags:
  - trl
  - sft
  - generated_from_trainer
model-index:
  - name: collapse_gemma-2-9b_hs2_replace_iter1_sftsd2
    results: []
---

# collapse_gemma-2-9b_hs2_replace_iter1_sftsd2

This model is a fine-tuned version of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b) on an unknown dataset.
It achieves the following results on the evaluation set:

- Loss: 0.9313
- Num Input Tokens Seen: 5236236

## Model description

More information needed

## Intended uses & limitations

More information needed
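
The card does not include a usage example. Purely as an illustration, below is a minimal inference sketch; the repo id is assumed from the author namespace and model name on this card, and the dtype, device settings, and prompt are likewise assumptions rather than documented usage.

```python
# Minimal inference sketch. The repo id is assumed from the author and model
# name on this card; adjust it if the checkpoint lives elsewhere.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RylanSchaeffer/collapse_gemma-2-9b_hs2_replace_iter1_sftsd2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # bf16 is the typical dtype for Gemma-2 weights
    device_map="auto",
)

prompt = "Fine-tuning a language model on its own outputs"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```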

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged TRL reconstruction follows the list):

- learning_rate: 8e-06
- train_batch_size: 4
- eval_batch_size: 16
- seed: 2
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
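
These values map directly onto standard 🤗 `TrainingArguments` fields. The sketch below shows an equivalent TRL SFT setup; the dataset path and text field are placeholders, since the card does not name the training data, and this is a reconstruction, not the author's actual script.

```python
# Hedged reconstruction of the SFT run from the hyperparameters above.
# The dataset path and text field are placeholders (the card does not name them).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder

config = SFTConfig(
    output_dir="collapse_gemma-2-9b_hs2_replace_iter1_sftsd2",
    learning_rate=8e-06,
    per_device_train_batch_size=4,   # train_batch_size above
    per_device_eval_batch_size=16,   # eval_batch_size above
    seed=2,
    gradient_accumulation_steps=32,  # 4 x 32 = 128 total train batch size on one device
    lr_scheduler_type="constant_with_warmup",
    warmup_ratio=0.05,               # lr_scheduler_warmup_ratio above
    num_train_epochs=1,
    dataset_text_field="text",       # assumed field name in the placeholder dataset
    logging_steps=5,                 # matches the 5-step logging cadence in the table below
)

# The reported Adam betas=(0.9,0.999) and epsilon=1e-08 are the Trainer
# defaults, so no optimizer override is needed.
trainer = SFTTrainer(
    model="google/gemma-2-9b",  # base model from the card
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```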

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| No log        | 0      | 0    | 1.2335          | 0                 |
| 1.0584        | 0.0511 | 5    | 1.0644          | 270332            |
| 1.0014        | 0.1021 | 10   | 0.9817          | 541500            |
| 0.9947        | 0.1532 | 15   | 0.9708          | 812584            |
| 0.9594        | 0.2043 | 20   | 0.9624          | 1083956           |
| 0.9482        | 0.2553 | 25   | 0.9565          | 1356232           |
| 0.9359        | 0.3064 | 30   | 0.9534          | 1618536           |
| 0.9857        | 0.3575 | 35   | 0.9502          | 1882576           |
| 0.9566        | 0.4086 | 40   | 0.9469          | 2154872           |
| 0.9121        | 0.4596 | 45   | 0.9447          | 2423272           |
| 0.917         | 0.5107 | 50   | 0.9428          | 2695312           |
| 0.9864        | 0.5618 | 55   | 0.9412          | 2965956           |
| 0.9304        | 0.6128 | 60   | 0.9401          | 3236124           |
| 0.8757        | 0.6639 | 65   | 0.9384          | 3504696           |
| 0.9522        | 0.7150 | 70   | 0.9370          | 3779560           |
| 0.9782        | 0.7660 | 75   | 0.9360          | 4047404           |
| 0.9433        | 0.8171 | 80   | 0.9352          | 4319468           |
| 0.9083        | 0.8682 | 85   | 0.9336          | 4590728           |
| 0.9983        | 0.9192 | 90   | 0.9326          | 4860344           |
| 0.9986        | 0.9703 | 95   | 0.9319          | 5128848           |
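
Assuming the validation loss is mean per-token cross-entropy in nats (the Trainer convention for causal LM fine-tuning), each entry can be read as a perplexity via exp(loss); over the run, validation perplexity drops from about 3.43 to about 2.54:

```python
# Convert the logged cross-entropy losses to perplexity (standard identity,
# assuming mean per-token negative log-likelihood in nats).
import math

for loss in (1.2335, 0.9313):  # first and final validation losses from the table
    print(f"loss={loss:.4f} -> perplexity={math.exp(loss):.3f}")
# loss=1.2335 -> perplexity=3.433
# loss=0.9313 -> perplexity=2.538
```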

### Framework versions

- Transformers 4.44.0
- PyTorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1