# gemma_finetune_1.1
This model is a fine-tuned version of [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.0927
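Since PEFT appears in the framework versions below, this checkpoint is presumably a PEFT (e.g. LoRA) adapter on top of the base model rather than full fine-tuned weights. A minimal loading-and-generation sketch follows; the adapter repo id is a placeholder, not the actual path of this repository.

```python
# Minimal sketch: load the base Gemma model, apply the fine-tuned PEFT
# adapter, and generate. The adapter_id below is a hypothetical placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-1.1-2b-it"
adapter_id = "your-username/gemma_finetune_1.1"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```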
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
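The listed Adam settings (betas=(0.9, 0.999), epsilon=1e-08) and linear schedule match the `transformers` Trainer defaults, so a rough `TrainingArguments` reproduction might look like the sketch below. The output path and the 500-step evaluation cadence (inferred from the results table) are assumptions.

```python
# Sketch of a TrainingArguments configuration matching the listed
# hyperparameters. "Native AMP" corresponds to fp16=True; the eval/logging
# cadence of 500 steps is an assumption read off the results table.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gemma_finetune_1.1",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                        # Native AMP mixed precision
    evaluation_strategy="steps",      # assumption: eval every 500 steps
    eval_steps=500,
    logging_steps=500,
)
```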
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
0.1521 | 0.04 | 500 | 0.2275 |
0.5456 | 0.07 | 1000 | 0.2137 |
0.1141 | 0.11 | 1500 | 0.1788 |
0.1028 | 0.14 | 2000 | 0.1829 |
0.1988 | 0.18 | 2500 | 0.1795 |
0.205 | 0.21 | 3000 | 0.1748 |
0.5189 | 0.25 | 3500 | 0.1781 |
0.1346 | 0.29 | 4000 | 0.1574 |
0.0931 | 0.32 | 4500 | 0.1595 |
0.0133 | 0.36 | 5000 | 0.1599 |
0.0104 | 0.39 | 5500 | 0.1499 |
0.1221 | 0.43 | 6000 | 0.1506 |
0.185 | 0.46 | 6500 | 0.1509 |
0.229 | 0.5 | 7000 | 0.1442 |
0.2068 | 0.53 | 7500 | 0.1414 |
0.2058 | 0.57 | 8000 | 0.1333 |
0.0324 | 0.61 | 8500 | 0.1351 |
0.0318 | 0.64 | 9000 | 0.1318 |
0.0483 | 0.68 | 9500 | 0.1281 |
0.5406 | 0.71 | 10000 | 0.1281 |
0.1692 | 0.75 | 10500 | 0.1163 |
0.1522 | 0.78 | 11000 | 0.1223 |
0.087 | 0.82 | 11500 | 0.1321 |
0.101 | 0.86 | 12000 | 0.1165 |
0.0304 | 0.89 | 12500 | 0.1264 |
0.1497 | 0.93 | 13000 | 0.1236 |
0.0305 | 0.96 | 13500 | 0.1201 |
0.1281 | 1.0 | 14000 | 0.1122 |
0.0156 | 1.03 | 14500 | 0.1059 |
0.1083 | 1.07 | 15000 | 0.1114 |
0.0807 | 1.1 | 15500 | 0.1229 |
0.0267 | 1.14 | 16000 | 0.1156 |
0.1179 | 1.18 | 16500 | 0.1215 |
0.1965 | 1.21 | 17000 | 0.1153 |
0.0823 | 1.25 | 17500 | 0.1058 |
0.0079 | 1.28 | 18000 | 0.1142 |
0.0169 | 1.32 | 18500 | 0.1118 |
0.0269 | 1.35 | 19000 | 0.1123 |
0.1517 | 1.39 | 19500 | 0.1188 |
0.2076 | 1.43 | 20000 | 0.1070 |
0.0101 | 1.46 | 20500 | 0.1053 |
0.0771 | 1.5 | 21000 | 0.1158 |
0.0624 | 1.53 | 21500 | 0.1040 |
0.0268 | 1.57 | 22000 | 0.1013 |
0.0156 | 1.6 | 22500 | 0.1069 |
0.0053 | 1.64 | 23000 | 0.1068 |
0.1252 | 1.67 | 23500 | 0.0974 |
0.0383 | 1.71 | 24000 | 0.0930 |
0.0169 | 1.75 | 24500 | 0.0941 |
0.0212 | 1.78 | 25000 | 0.0909 |
0.0069 | 1.82 | 25500 | 0.0901 |
0.0263 | 1.85 | 26000 | 0.0939 |
0.0046 | 1.89 | 26500 | 0.0893 |
0.0649 | 1.92 | 27000 | 0.0894 |
0.0058 | 1.96 | 27500 | 0.0879 |
0.0363 | 2.0 | 28000 | 0.0856 |
0.0055 | 2.03 | 28500 | 0.1035 |
0.0236 | 2.07 | 29000 | 0.1034 |
0.0252 | 2.1 | 29500 | 0.0965 |
0.005 | 2.14 | 30000 | 0.1006 |
0.1731 | 2.17 | 30500 | 0.0989 |
0.1006 | 2.21 | 31000 | 0.0958 |
0.0082 | 2.25 | 31500 | 0.0928 |
0.0051 | 2.28 | 32000 | 0.0966 |
0.0052 | 2.32 | 32500 | 0.0924 |
0.0231 | 2.35 | 33000 | 0.0942 |
0.0131 | 2.39 | 33500 | 0.0913 |
0.0055 | 2.42 | 34000 | 0.0911 |
0.0304 | 2.46 | 34500 | 0.0912 |
0.031 | 2.49 | 35000 | 0.0930 |
0.1609 | 2.53 | 35500 | 0.0927 |
0.0964 | 2.57 | 36000 | 0.0910 |
0.0177 | 2.6 | 36500 | 0.0929 |
0.0052 | 2.64 | 37000 | 0.0953 |
0.0425 | 2.67 | 37500 | 0.0949 |
0.0105 | 2.71 | 38000 | 0.0917 |
0.0045 | 2.74 | 38500 | 0.0932 |
0.0052 | 2.78 | 39000 | 0.0949 |
0.0052 | 2.82 | 39500 | 0.0957 |
0.0047 | 2.85 | 40000 | 0.0944 |
0.0053 | 2.89 | 40500 | 0.0932 |
0.0804 | 2.92 | 41000 | 0.0930 |
0.0089 | 2.96 | 41500 | 0.0926 |
0.0125 | 2.99 | 42000 | 0.0927 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
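For reproduction, it may help to pin these exact versions. A small illustrative check like the following verifies the local environment against the list above.

```python
# Illustrative environment check against the versions listed above.
expected = {
    "peft": "0.9.0",
    "transformers": "4.38.2",
    "torch": "2.2.1+cu121",
    "datasets": "2.18.0",
    "tokenizers": "0.15.2",
}
for mod, want in expected.items():
    have = __import__(mod).__version__
    print(f"{mod}: {have} (expected {want})")
```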