thorirhrafn committed
Commit 56455bc
Parent: 073f44f

End of training

Files changed (1)
  1. README.md +29 -29
README.md CHANGED
@@ -18,15 +18,15 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.0587
- - Rewards/chosen: 0.4885
- - Rewards/rejected: -2.5446
+ - Loss: 0.1205
+ - Rewards/chosen: 0.4005
+ - Rewards/rejected: -1.7841
  - Rewards/accuracies: 1.0
- - Rewards/margins: 3.0331
- - Logps/rejected: -210.2559
- - Logps/chosen: -155.7489
- - Logits/rejected: -1.0525
- - Logits/chosen: -0.8603
+ - Rewards/margins: 2.1847
+ - Logps/rejected: -202.6509
+ - Logps/chosen: -156.6288
+ - Logits/rejected: -1.0515
+ - Logits/chosen: -0.8581
 
  ## Model description
 
@@ -45,7 +45,7 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 1e-06
+ - learning_rate: 7e-07
  - train_batch_size: 1
  - eval_batch_size: 1
  - seed: 42
@@ -59,26 +59,26 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
  |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
- | 0.6664 | 0.1 | 25 | 0.6240 | 0.0413 | -0.1038 | 0.9633 | 0.1451 | -185.8477 | -160.2207 | -1.0521 | -0.8552 |
- | 0.5275 | 0.2 | 50 | 0.4961 | 0.1194 | -0.3323 | 1.0 | 0.4517 | -188.1325 | -159.4397 | -1.0520 | -0.8543 |
- | 0.4242 | 0.3 | 75 | 0.3772 | 0.1960 | -0.6107 | 1.0 | 0.8067 | -190.9165 | -158.6736 | -1.0530 | -0.8585 |
- | 0.3194 | 0.4 | 100 | 0.2809 | 0.2609 | -0.9146 | 1.0 | 1.1755 | -193.9560 | -158.0250 | -1.0526 | -0.8576 |
- | 0.2569 | 0.5 | 125 | 0.2098 | 0.3243 | -1.2033 | 1.0 | 1.5276 | -196.8424 | -157.3911 | -1.0523 | -0.8568 |
- | 0.1815 | 0.6 | 150 | 0.1591 | 0.3689 | -1.4935 | 1.0 | 1.8624 | -199.7451 | -156.9453 | -1.0527 | -0.8590 |
- | 0.1488 | 0.7 | 175 | 0.1233 | 0.4109 | -1.7538 | 1.0 | 2.1647 | -202.3471 | -156.5246 | -1.0528 | -0.8590 |
- | 0.1097 | 0.79 | 200 | 0.0966 | 0.4448 | -2.0010 | 1.0 | 2.4458 | -204.8196 | -156.1859 | -1.0531 | -0.8595 |
- | 0.0925 | 0.89 | 225 | 0.0804 | 0.4615 | -2.1974 | 1.0 | 2.6589 | -206.7837 | -156.0186 | -1.0534 | -0.8616 |
- | 0.0748 | 0.99 | 250 | 0.0707 | 0.4708 | -2.3440 | 1.0 | 2.8148 | -208.2495 | -155.9261 | -1.0526 | -0.8606 |
- | 0.0717 | 1.09 | 275 | 0.0649 | 0.4788 | -2.4354 | 1.0 | 2.9142 | -209.1637 | -155.8455 | -1.0523 | -0.8600 |
- | 0.057 | 1.19 | 300 | 0.0616 | 0.4820 | -2.4896 | 1.0 | 2.9716 | -209.7052 | -155.8138 | -1.0532 | -0.8609 |
- | 0.0543 | 1.29 | 325 | 0.0598 | 0.4864 | -2.5199 | 1.0 | 3.0064 | -210.0089 | -155.7695 | -1.0522 | -0.8598 |
- | 0.0634 | 1.39 | 350 | 0.0591 | 0.4873 | -2.5345 | 1.0 | 3.0218 | -210.1548 | -155.7612 | -1.0529 | -0.8603 |
- | 0.0614 | 1.49 | 375 | 0.0584 | 0.4896 | -2.5466 | 1.0 | 3.0362 | -210.2760 | -155.7379 | -1.0528 | -0.8597 |
- | 0.0543 | 1.59 | 400 | 0.0580 | 0.4918 | -2.5464 | 1.0 | 3.0382 | -210.2738 | -155.7159 | -1.0528 | -0.8597 |
- | 0.0532 | 1.69 | 425 | 0.0579 | 0.4902 | -2.5495 | 1.0 | 3.0397 | -210.3050 | -155.7321 | -1.0520 | -0.8605 |
- | 0.0632 | 1.79 | 450 | 0.0577 | 0.4907 | -2.5514 | 1.0 | 3.0422 | -210.3238 | -155.7266 | -1.0522 | -0.8601 |
- | 0.0596 | 1.89 | 475 | 0.0579 | 0.4923 | -2.5509 | 1.0 | 3.0432 | -210.3188 | -155.7112 | -1.0527 | -0.8614 |
- | 0.0597 | 1.99 | 500 | 0.0587 | 0.4885 | -2.5446 | 1.0 | 3.0331 | -210.2559 | -155.7489 | -1.0525 | -0.8603 |
+ | 0.6753 | 0.1 | 25 | 0.6561 | 0.0241 | -0.0529 | 0.8800 | 0.0770 | -185.3385 | -160.3932 | -1.0518 | -0.8547 |
+ | 0.596 | 0.2 | 50 | 0.5763 | 0.0663 | -0.1863 | 0.9933 | 0.2525 | -186.6722 | -159.9714 | -1.0527 | -0.8563 |
+ | 0.5265 | 0.3 | 75 | 0.4888 | 0.1230 | -0.3480 | 1.0 | 0.4710 | -188.2895 | -159.4043 | -1.0529 | -0.8557 |
+ | 0.4405 | 0.4 | 100 | 0.4115 | 0.1711 | -0.5248 | 1.0 | 0.6959 | -190.0574 | -158.9227 | -1.0521 | -0.8557 |
+ | 0.3832 | 0.5 | 125 | 0.3418 | 0.2187 | -0.7108 | 1.0 | 0.9295 | -191.9176 | -158.4473 | -1.0530 | -0.8571 |
+ | 0.3071 | 0.6 | 150 | 0.2809 | 0.2614 | -0.9143 | 1.0 | 1.1757 | -193.9524 | -158.0195 | -1.0526 | -0.8568 |
+ | 0.2635 | 0.7 | 175 | 0.2300 | 0.3051 | -1.1158 | 1.0 | 1.4209 | -195.9679 | -157.5830 | -1.0531 | -0.8575 |
+ | 0.2056 | 0.79 | 200 | 0.1912 | 0.3381 | -1.3041 | 1.0 | 1.6422 | -197.8509 | -157.2532 | -1.0529 | -0.8577 |
+ | 0.1735 | 0.89 | 225 | 0.1617 | 0.3637 | -1.4760 | 1.0 | 1.8397 | -199.5699 | -156.9968 | -1.0524 | -0.8580 |
+ | 0.1492 | 0.99 | 250 | 0.1416 | 0.3797 | -1.6179 | 1.0 | 1.9976 | -200.9889 | -156.8374 | -1.0521 | -0.8575 |
+ | 0.144 | 1.09 | 275 | 0.1304 | 0.3918 | -1.6997 | 1.0 | 2.0915 | -201.8062 | -156.7157 | -1.0517 | -0.8590 |
+ | 0.1203 | 1.19 | 300 | 0.1255 | 0.3955 | -1.7398 | 1.0 | 2.1353 | -202.2080 | -156.6790 | -1.0514 | -0.8580 |
+ | 0.117 | 1.29 | 325 | 0.1229 | 0.3961 | -1.7635 | 1.0 | 2.1596 | -202.4451 | -156.6730 | -1.0514 | -0.8572 |
+ | 0.1286 | 1.39 | 350 | 0.1209 | 0.4018 | -1.7766 | 1.0 | 2.1784 | -202.5752 | -156.6156 | -1.0517 | -0.8587 |
+ | 0.126 | 1.49 | 375 | 0.1199 | 0.4025 | -1.7866 | 1.0 | 2.1891 | -202.6759 | -156.6091 | -1.0517 | -0.8587 |
+ | 0.1154 | 1.59 | 400 | 0.1202 | 0.4013 | -1.7865 | 1.0 | 2.1877 | -202.6743 | -156.6213 | -1.0514 | -0.8580 |
+ | 0.1141 | 1.69 | 425 | 0.1200 | 0.3990 | -1.7907 | 1.0 | 2.1897 | -202.7168 | -156.6437 | -1.0518 | -0.8578 |
+ | 0.1284 | 1.79 | 450 | 0.1196 | 0.4012 | -1.7899 | 1.0 | 2.1910 | -202.7081 | -156.6221 | -1.0518 | -0.8582 |
+ | 0.1225 | 1.89 | 475 | 0.1205 | 0.3984 | -1.7858 | 1.0 | 2.1842 | -202.6674 | -156.6495 | -1.0517 | -0.8592 |
+ | 0.1224 | 1.99 | 500 | 0.1205 | 0.4005 | -1.7841 | 1.0 | 2.1847 | -202.6509 | -156.6288 | -1.0515 | -0.8581 |
 
 
  ### Framework versions
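
The `Rewards/*` and `Loss` columns in both versions of the card are the metrics TRL-style DPO training logs: the implicit reward is the beta-scaled log-probability ratio of a completion under the fine-tuned policy versus the frozen reference model, and the loss is the negative log-sigmoid of the chosen-minus-rejected reward margin. A minimal sketch of how these numbers relate, assuming `beta=0.1` (the card does not state the beta used) and with `dpo_stats` as a hypothetical helper fed illustrative log-probabilities, not values taken from the table:

```python
import math

def dpo_stats(policy_chosen_logp, policy_rejected_logp,
              ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Hypothetical helper: DPO implicit rewards, margin, and loss for one
    preference pair. Inputs are summed token log-probabilities (the Logps/*
    columns) under the fine-tuned policy and the frozen reference model;
    beta=0.1 is an assumed value, not one stated in the model card."""
    # Implicit reward: beta-scaled log-ratio of policy vs. reference likelihood.
    reward_chosen = beta * (policy_chosen_logp - ref_chosen_logp)
    reward_rejected = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = reward_chosen - reward_rejected  # corresponds to Rewards/margins
    # DPO loss: -log(sigmoid(margin)); approaches 0 as the margin grows.
    loss = -math.log(1.0 / (1.0 + math.exp(-margin)))
    return reward_chosen, reward_rejected, margin, loss

# Illustrative log-probs (assumed, same ballpark as the table's Logps columns):
rc, rr, margin, loss = dpo_stats(-155.7, -210.3, -160.6, -185.3)
print(rc, rr, margin, loss)
```

Under this reading, the table's trends are consistent: as the chosen-rejected reward margin widens over the 500 steps, the validation loss falls toward zero and `Rewards/accuracies` (the fraction of pairs with a positive margin) saturates at 1.0.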