thorirhrafn committed on
Commit e4b8a67
1 Parent(s): b854177

End of training

Files changed (1)
  1. README.md +29 -29
README.md CHANGED
@@ -18,15 +18,15 @@ should probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.0896
- - Rewards/chosen: 0.4401
- - Rewards/rejected: -2.0930
+ - Loss: 0.0587
+ - Rewards/chosen: 0.4885
+ - Rewards/rejected: -2.5446
  - Rewards/accuracies: 1.0
- - Rewards/margins: 2.5330
- - Logps/rejected: -205.7391
- - Logps/chosen: -156.2334
- - Logits/rejected: -1.0514
- - Logits/chosen: -0.8587
+ - Rewards/margins: 3.0331
+ - Logps/rejected: -210.2559
+ - Logps/chosen: -155.7489
+ - Logits/rejected: -1.0525
+ - Logits/chosen: -0.8603
  
  ## Model description
  
@@ -45,7 +45,7 @@ More information needed
  ### Training hyperparameters
  
  The following hyperparameters were used during training:
- - learning_rate: 8e-07
+ - learning_rate: 1e-06
  - train_batch_size: 1
  - eval_batch_size: 1
  - seed: 42
@@ -59,26 +59,26 @@ The following hyperparameters were used during training:
  
  | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
  |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
- | 0.6699 | 0.1 | 25 | 0.6428 | 0.0307 | -0.0744 | 0.9033 | 0.1051 | -185.5532 | -160.3267 | -1.0520 | -0.8550 |
- | 0.5702 | 0.2 | 50 | 0.5471 | 0.0866 | -0.2359 | 0.9933 | 0.3225 | -187.1690 | -159.7680 | -1.0514 | -0.8544 |
- | 0.488 | 0.3 | 75 | 0.4456 | 0.1502 | -0.4424 | 1.0 | 0.5926 | -189.2334 | -159.1314 | -1.0527 | -0.8555 |
- | 0.3957 | 0.4 | 100 | 0.3600 | 0.2054 | -0.6615 | 1.0 | 0.8669 | -191.4245 | -158.5795 | -1.0530 | -0.8577 |
- | 0.3338 | 0.5 | 125 | 0.2865 | 0.2569 | -0.8933 | 1.0 | 1.1502 | -193.7425 | -158.0646 | -1.0524 | -0.8564 |
- | 0.253 | 0.6 | 150 | 0.2257 | 0.3043 | -1.1373 | 1.0 | 1.4416 | -196.1830 | -157.5914 | -1.0523 | -0.8570 |
- | 0.2134 | 0.7 | 175 | 0.1819 | 0.3496 | -1.3537 | 1.0 | 1.7033 | -198.3466 | -157.1379 | -1.0530 | -0.8584 |
- | 0.1613 | 0.79 | 200 | 0.1473 | 0.3842 | -1.5693 | 1.0 | 1.9535 | -200.5027 | -156.7917 | -1.0525 | -0.8591 |
- | 0.1358 | 0.89 | 225 | 0.1231 | 0.4031 | -1.7582 | 1.0 | 2.1614 | -202.3919 | -156.6024 | -1.0523 | -0.8593 |
- | 0.115 | 0.99 | 250 | 0.1076 | 0.4205 | -1.8980 | 1.0 | 2.3185 | -203.7897 | -156.4292 | -1.0521 | -0.8590 |
- | 0.1111 | 1.09 | 275 | 0.0989 | 0.4291 | -1.9856 | 1.0 | 2.4148 | -204.6660 | -156.3426 | -1.0515 | -0.8591 |
- | 0.0902 | 1.19 | 300 | 0.0949 | 0.4280 | -2.0337 | 1.0 | 2.4617 | -205.1465 | -156.3540 | -1.0507 | -0.8576 |
- | 0.0867 | 1.29 | 325 | 0.0920 | 0.4325 | -2.0705 | 1.0 | 2.5030 | -205.5146 | -156.3087 | -1.0510 | -0.8576 |
- | 0.0973 | 1.39 | 350 | 0.0905 | 0.4357 | -2.0839 | 1.0 | 2.5196 | -205.6485 | -156.2766 | -1.0506 | -0.8576 |
- | 0.0942 | 1.49 | 375 | 0.0897 | 0.4422 | -2.0838 | 1.0 | 2.5260 | -205.6476 | -156.2122 | -1.0515 | -0.8578 |
- | 0.0858 | 1.59 | 400 | 0.0897 | 0.4392 | -2.0903 | 1.0 | 2.5295 | -205.7121 | -156.2415 | -1.0515 | -0.8587 |
- | 0.083 | 1.69 | 425 | 0.0893 | 0.4401 | -2.0972 | 1.0 | 2.5373 | -205.7811 | -156.2327 | -1.0511 | -0.8584 |
- | 0.0964 | 1.79 | 450 | 0.0897 | 0.4368 | -2.0947 | 1.0 | 2.5315 | -205.7564 | -156.2662 | -1.0511 | -0.8577 |
- | 0.0931 | 1.89 | 475 | 0.0890 | 0.4406 | -2.0970 | 1.0 | 2.5376 | -205.7794 | -156.2282 | -1.0512 | -0.8585 |
- | 0.0915 | 1.99 | 500 | 0.0896 | 0.4401 | -2.0930 | 1.0 | 2.5330 | -205.7391 | -156.2334 | -1.0514 | -0.8587 |
+ | 0.6664 | 0.1 | 25 | 0.6240 | 0.0413 | -0.1038 | 0.9633 | 0.1451 | -185.8477 | -160.2207 | -1.0521 | -0.8552 |
+ | 0.5275 | 0.2 | 50 | 0.4961 | 0.1194 | -0.3323 | 1.0 | 0.4517 | -188.1325 | -159.4397 | -1.0520 | -0.8543 |
+ | 0.4242 | 0.3 | 75 | 0.3772 | 0.1960 | -0.6107 | 1.0 | 0.8067 | -190.9165 | -158.6736 | -1.0530 | -0.8585 |
+ | 0.3194 | 0.4 | 100 | 0.2809 | 0.2609 | -0.9146 | 1.0 | 1.1755 | -193.9560 | -158.0250 | -1.0526 | -0.8576 |
+ | 0.2569 | 0.5 | 125 | 0.2098 | 0.3243 | -1.2033 | 1.0 | 1.5276 | -196.8424 | -157.3911 | -1.0523 | -0.8568 |
+ | 0.1815 | 0.6 | 150 | 0.1591 | 0.3689 | -1.4935 | 1.0 | 1.8624 | -199.7451 | -156.9453 | -1.0527 | -0.8590 |
+ | 0.1488 | 0.7 | 175 | 0.1233 | 0.4109 | -1.7538 | 1.0 | 2.1647 | -202.3471 | -156.5246 | -1.0528 | -0.8590 |
+ | 0.1097 | 0.79 | 200 | 0.0966 | 0.4448 | -2.0010 | 1.0 | 2.4458 | -204.8196 | -156.1859 | -1.0531 | -0.8595 |
+ | 0.0925 | 0.89 | 225 | 0.0804 | 0.4615 | -2.1974 | 1.0 | 2.6589 | -206.7837 | -156.0186 | -1.0534 | -0.8616 |
+ | 0.0748 | 0.99 | 250 | 0.0707 | 0.4708 | -2.3440 | 1.0 | 2.8148 | -208.2495 | -155.9261 | -1.0526 | -0.8606 |
+ | 0.0717 | 1.09 | 275 | 0.0649 | 0.4788 | -2.4354 | 1.0 | 2.9142 | -209.1637 | -155.8455 | -1.0523 | -0.8600 |
+ | 0.057 | 1.19 | 300 | 0.0616 | 0.4820 | -2.4896 | 1.0 | 2.9716 | -209.7052 | -155.8138 | -1.0532 | -0.8609 |
+ | 0.0543 | 1.29 | 325 | 0.0598 | 0.4864 | -2.5199 | 1.0 | 3.0064 | -210.0089 | -155.7695 | -1.0522 | -0.8598 |
+ | 0.0634 | 1.39 | 350 | 0.0591 | 0.4873 | -2.5345 | 1.0 | 3.0218 | -210.1548 | -155.7612 | -1.0529 | -0.8603 |
+ | 0.0614 | 1.49 | 375 | 0.0584 | 0.4896 | -2.5466 | 1.0 | 3.0362 | -210.2760 | -155.7379 | -1.0528 | -0.8597 |
+ | 0.0543 | 1.59 | 400 | 0.0580 | 0.4918 | -2.5464 | 1.0 | 3.0382 | -210.2738 | -155.7159 | -1.0528 | -0.8597 |
+ | 0.0532 | 1.69 | 425 | 0.0579 | 0.4902 | -2.5495 | 1.0 | 3.0397 | -210.3050 | -155.7321 | -1.0520 | -0.8605 |
+ | 0.0632 | 1.79 | 450 | 0.0577 | 0.4907 | -2.5514 | 1.0 | 3.0422 | -210.3238 | -155.7266 | -1.0522 | -0.8601 |
+ | 0.0596 | 1.89 | 475 | 0.0579 | 0.4923 | -2.5509 | 1.0 | 3.0432 | -210.3188 | -155.7112 | -1.0527 | -0.8614 |
+ | 0.0597 | 1.99 | 500 | 0.0587 | 0.4885 | -2.5446 | 1.0 | 3.0331 | -210.2559 | -155.7489 | -1.0525 | -0.8603 |
  
  
  ### Framework versions
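
The metric names in the card (rewards/chosen, rewards/rejected, rewards/accuracies, rewards/margins, logps/*, logits/*) match what trl's DPOTrainer logs, and rewards/margins is simply rewards/chosen minus rewards/rejected (0.4885 − (−2.5446) ≈ 3.0331 for the final evaluation). As a rough guide only, the sketch below shows how the hyperparameters reported in the updated card (learning_rate 1e-06, train/eval batch size 1, seed 42, roughly 2 epochs with metrics logged every 25 steps) could be plugged into a trl DPO run; the dataset name, beta, and any PEFT/quantization handling are placeholders, not information from this commit.

```python
# Hypothetical reproduction sketch (not the author's script): maps the
# hyperparameters shown in the updated model card onto a trl DPO run.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "meta-llama/Llama-2-7b-hf"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder: the card does not name the preference dataset; DPOTrainer
# expects "prompt"/"chosen"/"rejected" columns.
pref_data = load_dataset("your-org/your-preference-dataset")

args = DPOConfig(
    output_dir="llama2-7b-dpo",
    learning_rate=1e-6,             # learning_rate: 1e-06 in the card
    per_device_train_batch_size=1,  # train_batch_size: 1
    per_device_eval_batch_size=1,   # eval_batch_size: 1
    seed=42,                        # seed: 42
    num_train_epochs=2,             # the log table ends near epoch 1.99
    logging_steps=25,               # training loss is reported every 25 steps
    beta=0.1,                       # assumption; beta is not in the card
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=pref_data["train"],
    eval_dataset=pref_data["test"],
    tokenizer=tokenizer,  # newer trl releases use processing_class= instead
)
trainer.train()
```

This is only a scaffold under the stated assumptions; the actual dataset, reference-model setup, and any LoRA or quantization configuration used for this checkpoint cannot be recovered from the diff alone.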