NanQiangHF committed
Commit
57a17b6
1 Parent(s): c44ad56

llama3.1_8b_dpo_bwgenerator_test

Files changed (3)
  1. README.md +23 -23
  2. adapter_model.safetensors +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -18,15 +18,15 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.0900
- - Rewards/chosen: -9.2458
- - Rewards/rejected: -18.5064
- - Rewards/accuracies: 0.9799
- - Rewards/margins: 9.2605
- - Logps/rejected: -295.2113
- - Logps/chosen: -177.0069
- - Logits/rejected: -1.0648
- - Logits/chosen: -1.6755
+ - Loss: 0.0325
+ - Rewards/chosen: -8.0882
+ - Rewards/rejected: -39.4615
+ - Rewards/accuracies: 0.9958
+ - Rewards/margins: 31.3733
+ - Logps/rejected: -504.7621
+ - Logps/chosen: -165.4306
+ - Logits/rejected: -1.1893
+ - Logits/chosen: -1.7730
 
 ## Model description
 
@@ -45,7 +45,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
- - learning_rate: 5e-06
+ - learning_rate: 5e-05
 - train_batch_size: 4
 - eval_batch_size: 4
 - seed: 42
@@ -57,19 +57,19 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
 |:-------------:|:------:|:-----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
- | 0.122 | 0.0719 | 1000 | 0.1055 | -6.4463 | -12.7595 | 0.9689 | 6.3132 | -237.7425 | -149.0121 | -1.0551 | -1.6809 |
- | 0.1018 | 0.1438 | 2000 | 0.0928 | -8.3841 | -16.7138 | 0.9760 | 8.3297 | -277.2856 | -168.3895 | -1.0613 | -1.6756 |
- | 0.0975 | 0.2157 | 3000 | 0.0914 | -9.0349 | -17.9922 | 0.9773 | 8.9574 | -290.0698 | -174.8974 | -1.0675 | -1.6787 |
- | 0.0861 | 0.2876 | 4000 | 0.0911 | -9.1503 | -18.2788 | 0.9786 | 9.1285 | -292.9356 | -176.0516 | -1.0649 | -1.6760 |
- | 0.0957 | 0.3595 | 5000 | 0.0904 | -9.2383 | -18.4646 | 0.9786 | 9.2263 | -294.7940 | -176.9318 | -1.0621 | -1.6732 |
- | 0.079 | 0.4313 | 6000 | 0.0900 | -9.1569 | -18.3683 | 0.9806 | 9.2114 | -293.8309 | -176.1181 | -1.0645 | -1.6758 |
- | 0.0692 | 0.5032 | 7000 | 0.0901 | -9.2211 | -18.4391 | 0.9802 | 9.2179 | -294.5381 | -176.7600 | -1.0652 | -1.6760 |
- | 0.0931 | 0.5751 | 8000 | 0.0901 | -9.2306 | -18.4876 | 0.9802 | 9.2570 | -295.0236 | -176.8544 | -1.0630 | -1.6740 |
- | 0.0863 | 0.6470 | 9000 | 0.0902 | -9.2159 | -18.4436 | 0.9799 | 9.2277 | -294.5839 | -176.7078 | -1.0635 | -1.6746 |
- | 0.0942 | 0.7189 | 10000 | 0.0902 | -9.1872 | -18.4035 | 0.9802 | 9.2163 | -294.1824 | -176.4204 | -1.0647 | -1.6760 |
- | 0.0771 | 0.7908 | 11000 | 0.0902 | -9.2250 | -18.4541 | 0.9796 | 9.2290 | -294.6884 | -176.7990 | -1.0629 | -1.6739 |
- | 0.0916 | 0.8627 | 12000 | 0.0903 | -9.2340 | -18.4770 | 0.9799 | 9.2430 | -294.9172 | -176.8884 | -1.0633 | -1.6744 |
- | 0.0999 | 0.9346 | 13000 | 0.0900 | -9.2458 | -18.5064 | 0.9799 | 9.2605 | -295.2113 | -177.0069 | -1.0648 | -1.6755 |
+ | 0.0854 | 0.0719 | 1000 | 0.1058 | -28.5182 | -64.6284 | 0.9929 | 36.1101 | -756.4312 | -369.7310 | -1.1763 | -1.7541 |
+ | 0.078 | 0.1438 | 2000 | 0.0582 | -16.5113 | -45.2514 | 0.9938 | 28.7401 | -562.6614 | -249.6615 | -1.1262 | -1.7216 |
+ | 0.0458 | 0.2157 | 3000 | 0.0506 | -12.8337 | -41.3538 | 0.9942 | 28.5201 | -523.6852 | -212.8855 | -1.3210 | -1.8884 |
+ | 0.0295 | 0.2876 | 4000 | 0.0534 | -12.7034 | -45.1669 | 0.9942 | 32.4635 | -561.8164 | -211.5826 | -1.2303 | -1.8040 |
+ | 0.0442 | 0.3595 | 5000 | 0.0428 | -10.9032 | -42.1320 | 0.9955 | 31.2288 | -531.4679 | -193.5811 | -1.2327 | -1.8028 |
+ | 0.0329 | 0.4313 | 6000 | 0.0365 | -8.5207 | -36.8790 | 0.9951 | 28.3583 | -478.9377 | -169.7559 | -1.2024 | -1.7841 |
+ | 0.0384 | 0.5032 | 7000 | 0.0418 | -12.1405 | -46.4364 | 0.9955 | 34.2959 | -574.5117 | -205.9535 | -1.1646 | -1.7549 |
+ | 0.0596 | 0.5751 | 8000 | 0.0344 | -8.7801 | -39.5544 | 0.9951 | 30.7743 | -505.6917 | -172.3499 | -1.2145 | -1.7970 |
+ | 0.0437 | 0.6470 | 9000 | 0.0347 | -9.4417 | -41.5833 | 0.9955 | 32.1416 | -525.9807 | -178.9660 | -1.1796 | -1.7709 |
+ | 0.0203 | 0.7189 | 10000 | 0.0357 | -9.3723 | -41.8496 | 0.9951 | 32.4773 | -528.6439 | -178.2718 | -1.1694 | -1.7593 |
+ | 0.0257 | 0.7908 | 11000 | 0.0347 | -8.6569 | -40.6073 | 0.9961 | 31.9505 | -516.2208 | -171.1173 | -1.1821 | -1.7676 |
+ | 0.0355 | 0.8627 | 12000 | 0.0332 | -8.4060 | -40.1402 | 0.9964 | 31.7342 | -511.5494 | -168.6083 | -1.1878 | -1.7722 |
+ | 0.0553 | 0.9346 | 13000 | 0.0325 | -8.0882 | -39.4615 | 0.9958 | 31.3733 | -504.7621 | -165.4306 | -1.1893 | -1.7730 |
 
 
 ### Framework versions
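As a sanity check on the metrics in the diff above: `Rewards/margins` is simply `Rewards/chosen` minus `Rewards/rejected`, and (assuming TRL's standard sigmoid DPO loss, where the logged rewards already include the β factor) the per-pair loss is `-log(sigmoid(margin))`. A minimal sketch in plain Python, with the values copied from the final evaluation row:

```python
import math

# Final evaluation metrics from the table above (step 13000).
rewards_chosen = -8.0882
rewards_rejected = -39.4615
rewards_margins = 31.3733

# The logged margin is chosen reward minus rejected reward.
assert abs((rewards_chosen - rewards_rejected) - rewards_margins) < 1e-3

# With the sigmoid DPO loss, a margin this large drives the per-pair
# loss to effectively zero, so the overall eval loss of 0.0325 is
# dominated by the few pairs the model still ranks incorrectly
# (accuracy 0.9958, i.e. ~0.4% of pairs).
pair_loss = -math.log(1.0 / (1.0 + math.exp(-rewards_margins)))
print(f"margin check ok, per-pair loss at full margin ~ {pair_loss:.1e}")
```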
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:cd2be6dacec1990ba4282386a6e9a6a706726002f06e7e21bebc97fb5a6d2806
+ oid sha256:10d098a97a18267cc65b9e00e62471199214813e5747a3dcc9a98b28af738652
 size 6832728
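The blob above is a Git LFS pointer file rather than the adapter weights themselves: each pointer records the spec version, the sha256 of the real object, and its size in bytes. A small illustrative parser, with the pointer text copied from the new `adapter_model.safetensors` pointer:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a key -> value dict."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", e.g. "size 6832728".
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:10d098a97a18267cc65b9e00e62471199214813e5747a3dcc9a98b28af738652
size 6832728
"""

fields = parse_lfs_pointer(pointer)
assert fields["oid"].startswith("sha256:")
assert int(fields["size"]) == 6832728  # ~6.8 MB: LoRA-adapter-sized, not a full 8B model
```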
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:f6eb6532f05f74dfd24506264fd2171a59f4782bfeee71427eeb32e03351dda9
+ oid sha256:ee3f0bddbbd87f0115c9b55adcef2fcdacc2d3f7ccf96431cb9f6373b224c1e1
 size 6008
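The `oid` in each pointer is the SHA-256 digest of the stored object, which is why it changes when `training_args.bin` is re-serialized after this run even though its size (6008 bytes) stays the same. Verifying a downloaded blob against its pointer is a single `hashlib` call; the byte string below is a toy stand-in, since the real blob is not part of this page:

```python
import hashlib

def verify_lfs_object(data: bytes, oid: str) -> bool:
    """Check a blob against the 'oid sha256:<hex>' field of its LFS pointer."""
    algo, _, expected = oid.partition(":")
    assert algo == "sha256"
    return hashlib.sha256(data).hexdigest() == expected

# Toy demonstration with in-memory bytes.
blob = b"example payload"
oid = "sha256:" + hashlib.sha256(blob).hexdigest()
assert verify_lfs_object(blob, oid)
assert not verify_lfs_object(b"tampered payload", oid)
```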