
0721_211205-google-gemma-2b

This model is a fine-tuned version of google/gemma-2b on an unknown dataset (a loading sketch follows the results below). It achieves the following results on the evaluation set:

  • Loss: 0.0008
  • Accuracy: 1.0
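
Because the card does not declare a pipeline type, loading is left to the reader. Below is a minimal sketch that assumes the adapter targets the base model's causal-LM head and is hosted at the repo id shown in the model tree at the end of this card; both are assumptions, not confirmed by the card.

```python
# Minimal loading sketch. Assumptions: the adapter was trained against
# google/gemma-2b's causal-LM head and is hosted at the repo id below;
# neither is stated on the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2b"
adapter_id = "steve-sli/0721_211205-google-gemma-2b"  # from the model tree

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("Hello,", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```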

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 100.0
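
For reference, here is a TrainingArguments sketch mirroring the list above. The output directory is an assumption, and the listed Adam betas and epsilon match the defaults of Trainer's standard AdamW optimizer.

```python
# TrainingArguments mirroring the hyperparameters above. output_dir is
# an assumption; the card does not state it.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="0721_211205-google-gemma-2b",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 16 * 2 = total train batch size of 32
    lr_scheduler_type="linear",
    num_train_epochs=100.0,
    # Trainer's default AdamW uses betas=(0.9, 0.999) and eps=1e-8,
    # matching the values listed above.
)
```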

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1742        | 5.0   | 5    | 0.3139          | 1.0      |
| 0.101         | 10.0  | 10   | 0.1813          | 1.0      |
| 0.0568        | 15.0  | 15   | 0.0998          | 1.0      |
| 0.0294        | 20.0  | 20   | 0.0514          | 1.0      |
| 0.0145        | 25.0  | 25   | 0.0252          | 1.0      |
| 0.0071        | 30.0  | 30   | 0.0127          | 1.0      |
| 0.0039        | 35.0  | 35   | 0.0070          | 1.0      |
| 0.0024        | 40.0  | 40   | 0.0042          | 1.0      |
| 0.0016        | 45.0  | 45   | 0.0028          | 1.0      |
| 0.0011        | 50.0  | 50   | 0.0021          | 1.0      |
| 0.0009        | 55.0  | 55   | 0.0016          | 1.0      |
| 0.0007        | 60.0  | 60   | 0.0014          | 1.0      |
| 0.0006        | 65.0  | 65   | 0.0012          | 1.0      |
| 0.0005        | 70.0  | 70   | 0.0011          | 1.0      |
| 0.0005        | 75.0  | 75   | 0.0010          | 1.0      |
| 0.0005        | 80.0  | 80   | 0.0009          | 1.0      |
| 0.0005        | 85.0  | 85   | 0.0009          | 1.0      |
| 0.0004        | 90.0  | 90   | 0.0009          | 1.0      |
| 0.0004        | 95.0  | 95   | 0.0008          | 1.0      |
| 0.0004        | 100.0 | 100  | 0.0008          | 1.0      |

Note that the step count equals the epoch count (one optimizer step per epoch), which suggests the training set fits in a single effective batch of 32 examples; evaluation accuracy is already 1.0 at the first logged step.

Framework versions

  • PEFT 0.11.1
  • Transformers 4.42.4
  • Pytorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
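
A quick environment check against the versions pinned above (a sketch; newer patch releases will usually also load the adapter):

```python
# Print installed versions next to the ones this model was trained with.
import peft, transformers, torch, datasets, tokenizers

for mod, trained_with in [
    (peft, "0.11.1"),
    (transformers, "4.42.4"),
    (torch, "2.3.1+cu121"),
    (datasets, "2.20.0"),
    (tokenizers, "0.19.1"),
]:
    print(f"{mod.__name__}: installed {mod.__version__}, card lists {trained_with}")
```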
Model tree for steve-sli/0721_211205-google-gemma-2b

  • Base model: google/gemma-2b (this repository is a PEFT adapter on top of it)