# llama_2_7b_Magiccoder_evol_downNupNgateNqNkNvNo_r8_lr0.0001_bg88_alpha8_0_41_reverseinit
This model is a PEFT (LoRA) fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.1235
## Model description
More information needed
## Intended uses & limitations
More information needed
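Since no usage details are documented, here is a minimal loading sketch. It assumes the adapter is published on the Hugging Face Hub (the repo id below is a placeholder, not stated in the card) and that you have access to the gated base weights:

```python
# Minimal loading sketch, assuming the adapter lives on the Hugging Face Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # gated base model; requires accepted access
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Hypothetical adapter repo id -- the card does not state where the adapter is hosted.
adapter_id = "your-org/llama_2_7b_Magiccoder_evol_downNupNgateNqNkNvNo_r8_lr0.0001_bg88_alpha8_0_41_reverseinit"
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```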
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the reconstruction sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 0.02
- num_epochs: 1
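For reference, the hyperparameters above map onto `transformers`/`peft` configuration roughly as follows. This is a hedged reconstruction: the LoRA settings (r=8, alpha=8, and the down/up/gate/q/k/v/o projection target modules) are inferred from the model name rather than stated in the card, and reading `lr_scheduler_warmup_steps: 0.02` as a warmup ratio is an assumption.

```python
# Hedged reconstruction of the training setup from the hyperparameters above.
from transformers import TrainingArguments
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                      # "r8" in the model name (assumption)
    lora_alpha=8,             # "alpha8" in the model name (assumption)
    target_modules=[          # "downNupNgateNqNkNvNo" in the model name (assumption)
        "down_proj", "up_proj", "gate_proj",
        "q_proj", "k_proj", "v_proj", "o_proj",
    ],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="llama2-7b-magicoder-lora",  # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # 8 x 8 = total train batch size of 64
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,               # interpreting the card's fractional warmup value as a ratio
    num_train_epochs=1,
    # Adam betas (0.9, 0.999) and epsilon 1e-08 match the transformers defaults.
)
```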
### Training results
Training Loss | Epoch | Step | Validation Loss |
---|---|---|---|
1.2502 | 0.0203 | 31 | 1.2151 |
1.1432 | 0.0405 | 62 | 1.1929 |
1.1409 | 0.0608 | 93 | 1.1804 |
1.1494 | 0.0810 | 124 | 1.1749 |
1.1213 | 0.1013 | 155 | 1.1669 |
1.1207 | 0.1215 | 186 | 1.1610 |
1.1488 | 0.1418 | 217 | 1.1596 |
1.1185 | 0.1620 | 248 | 1.1558 |
1.1321 | 0.1823 | 279 | 1.1539 |
1.1031 | 0.2025 | 310 | 1.1509 |
1.0976 | 0.2228 | 341 | 1.1506 |
1.1203 | 0.2431 | 372 | 1.1452 |
1.1118 | 0.2633 | 403 | 1.1472 |
1.1198 | 0.2836 | 434 | 1.1451 |
1.1149 | 0.3038 | 465 | 1.1436 |
1.1028 | 0.3241 | 496 | 1.1390 |
1.1137 | 0.3443 | 527 | 1.1387 |
1.1014 | 0.3646 | 558 | 1.1381 |
1.1078 | 0.3848 | 589 | 1.1378 |
1.0852 | 0.4051 | 620 | 1.1369 |
1.1071 | 0.4254 | 651 | 1.1370 |
1.1182 | 0.4456 | 682 | 1.1350 |
1.102 | 0.4659 | 713 | 1.1343 |
1.104 | 0.4861 | 744 | 1.1336 |
1.0855 | 0.5064 | 775 | 1.1333 |
1.083 | 0.5266 | 806 | 1.1305 |
1.0745 | 0.5469 | 837 | 1.1311 |
1.0763 | 0.5671 | 868 | 1.1295 |
1.0901 | 0.5874 | 899 | 1.1296 |
1.1007 | 0.6076 | 930 | 1.1293 |
1.0832 | 0.6279 | 961 | 1.1286 |
1.0931 | 0.6482 | 992 | 1.1261 |
1.0848 | 0.6684 | 1023 | 1.1264 |
1.1041 | 0.6887 | 1054 | 1.1263 |
1.0906 | 0.7089 | 1085 | 1.1244 |
1.0847 | 0.7292 | 1116 | 1.1257 |
1.0761 | 0.7494 | 1147 | 1.1249 |
1.0949 | 0.7697 | 1178 | 1.1243 |
1.0956 | 0.7899 | 1209 | 1.1240 |
1.0814 | 0.8102 | 1240 | 1.1240 |
1.0919 | 0.8304 | 1271 | 1.1242 |
1.0858 | 0.8507 | 1302 | 1.1240 |
1.0784 | 0.8710 | 1333 | 1.1238 |
1.0816 | 0.8912 | 1364 | 1.1236 |
1.0918 | 0.9115 | 1395 | 1.1233 |
1.1 | 0.9317 | 1426 | 1.1235 |
1.0551 | 0.9520 | 1457 | 1.1234 |
1.0643 | 0.9722 | 1488 | 1.1235 |
1.0921 | 0.9925 | 1519 | 1.1235 |
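As a rough sanity check (not reported in the original card), the final validation loss of about 1.1235 corresponds to a token-level perplexity of exp(1.1235) ≈ 3.08:

```python
# Standard conversion from causal-LM cross-entropy loss to perplexity.
import math
print(math.exp(1.1235))  # ≈ 3.08
```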
### Framework versions
- PEFT 0.7.1
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.16.1
- Tokenizers 0.19.1