---
license: gemma
library_name: peft
tags:
  - trl
  - sft
  - generated_from_trainer
  - ipex
  - GPU Max 1100
datasets:
  - generator
base_model: google/gemma-2b
model-index:
  - name: Gemma2B-LORAfied
    results: []
---

# Gemma2B-LORAfied

This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset. It achieves the following results on the evaluation set:

- Loss: 2.0206
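
The adapter can be loaded on top of the base model with PEFT. A minimal sketch; the repo id `migaraa/Gemma2B-LORAfied` and the instruction/response prompt format are assumptions, not confirmed by this card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-2b"
adapter_id = "migaraa/Gemma2B-LORAfied"  # assumed repo id for this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
model.eval()

prompt = "Instruction: Explain what LoRA fine-tuning does.\nResponse:"  # assumed format
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```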

## Training Hardware

This model was trained using:

- GPU: Intel(R) Data Center GPU Max 1100
- CPU: Intel(R) Xeon(R) Platinum 8480+
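
Since training used IPEX on an Intel GPU, inference on the same stack runs through PyTorch's `xpu` device. A minimal sketch, assuming Intel Extension for PyTorch is installed:

```python
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device with PyTorch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", torch_dtype=torch.bfloat16)
model = model.to("xpu")                             # move weights to the Intel GPU
model = ipex.optimize(model, dtype=torch.bfloat16)  # apply IPEX inference optimizations
```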

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 1480
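
For reference, these settings map onto `transformers.TrainingArguments` roughly as shown below. This is a sketch: `output_dir` is a placeholder, and the LoRA adapter configuration is omitted because the card does not list it.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Gemma2B-LORAfied",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # 2 x 8 = total train batch size of 16
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    max_steps=1480,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
)
```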

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.927         | 1.64  | 100  | 2.5783          |
| 2.4568        | 3.28  | 200  | 2.2983          |
| 2.2609        | 4.92  | 300  | 2.1769          |
| 2.1671        | 6.56  | 400  | 2.1051          |
| 2.1065        | 8.2   | 500  | 2.0739          |
| 2.0844        | 9.84  | 600  | 2.0567          |
| 2.0643        | 11.48 | 700  | 2.0455          |
| 2.0511        | 13.11 | 800  | 2.0374          |
| 2.0435        | 14.75 | 900  | 2.0318          |
| 2.0304        | 16.39 | 1000 | 2.0276          |
| 2.0245        | 18.03 | 1100 | 2.0248          |
| 2.0247        | 19.67 | 1200 | 2.0228          |
| 2.0096        | 21.31 | 1300 | 2.0212          |
| 2.0183        | 22.95 | 1400 | 2.0206          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.39.3
- PyTorch 2.0.1a0+cxx11.abi
- Datasets 2.18.0
- Tokenizers 0.15.2