Commit e4b8a67
Parent(s): b854177
End of training

README.md CHANGED
@@ -18,15 +18,15 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Rewards/chosen: 0.
-- Rewards/rejected: -2.
+- Loss: 0.0587
+- Rewards/chosen: 0.4885
+- Rewards/rejected: -2.5446
 - Rewards/accuracies: 1.0
-- Rewards/margins:
-- Logps/rejected: -
-- Logps/chosen: -
-- Logits/rejected: -1.
-- Logits/chosen: -0.
+- Rewards/margins: 3.0331
+- Logps/rejected: -210.2559
+- Logps/chosen: -155.7489
+- Logits/rejected: -1.0525
+- Logits/chosen: -0.8603
 
 ## Model description
 
@@ -45,7 +45,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate:
+- learning_rate: 1e-06
 - train_batch_size: 1
 - eval_batch_size: 1
 - seed: 42
@@ -59,26 +59,26 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
 |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
+| 0.6664 | 0.1 | 25 | 0.6240 | 0.0413 | -0.1038 | 0.9633 | 0.1451 | -185.8477 | -160.2207 | -1.0521 | -0.8552 |
+| 0.5275 | 0.2 | 50 | 0.4961 | 0.1194 | -0.3323 | 1.0 | 0.4517 | -188.1325 | -159.4397 | -1.0520 | -0.8543 |
+| 0.4242 | 0.3 | 75 | 0.3772 | 0.1960 | -0.6107 | 1.0 | 0.8067 | -190.9165 | -158.6736 | -1.0530 | -0.8585 |
+| 0.3194 | 0.4 | 100 | 0.2809 | 0.2609 | -0.9146 | 1.0 | 1.1755 | -193.9560 | -158.0250 | -1.0526 | -0.8576 |
+| 0.2569 | 0.5 | 125 | 0.2098 | 0.3243 | -1.2033 | 1.0 | 1.5276 | -196.8424 | -157.3911 | -1.0523 | -0.8568 |
+| 0.1815 | 0.6 | 150 | 0.1591 | 0.3689 | -1.4935 | 1.0 | 1.8624 | -199.7451 | -156.9453 | -1.0527 | -0.8590 |
+| 0.1488 | 0.7 | 175 | 0.1233 | 0.4109 | -1.7538 | 1.0 | 2.1647 | -202.3471 | -156.5246 | -1.0528 | -0.8590 |
+| 0.1097 | 0.79 | 200 | 0.0966 | 0.4448 | -2.0010 | 1.0 | 2.4458 | -204.8196 | -156.1859 | -1.0531 | -0.8595 |
+| 0.0925 | 0.89 | 225 | 0.0804 | 0.4615 | -2.1974 | 1.0 | 2.6589 | -206.7837 | -156.0186 | -1.0534 | -0.8616 |
+| 0.0748 | 0.99 | 250 | 0.0707 | 0.4708 | -2.3440 | 1.0 | 2.8148 | -208.2495 | -155.9261 | -1.0526 | -0.8606 |
+| 0.0717 | 1.09 | 275 | 0.0649 | 0.4788 | -2.4354 | 1.0 | 2.9142 | -209.1637 | -155.8455 | -1.0523 | -0.8600 |
+| 0.057 | 1.19 | 300 | 0.0616 | 0.4820 | -2.4896 | 1.0 | 2.9716 | -209.7052 | -155.8138 | -1.0532 | -0.8609 |
+| 0.0543 | 1.29 | 325 | 0.0598 | 0.4864 | -2.5199 | 1.0 | 3.0064 | -210.0089 | -155.7695 | -1.0522 | -0.8598 |
+| 0.0634 | 1.39 | 350 | 0.0591 | 0.4873 | -2.5345 | 1.0 | 3.0218 | -210.1548 | -155.7612 | -1.0529 | -0.8603 |
+| 0.0614 | 1.49 | 375 | 0.0584 | 0.4896 | -2.5466 | 1.0 | 3.0362 | -210.2760 | -155.7379 | -1.0528 | -0.8597 |
+| 0.0543 | 1.59 | 400 | 0.0580 | 0.4918 | -2.5464 | 1.0 | 3.0382 | -210.2738 | -155.7159 | -1.0528 | -0.8597 |
+| 0.0532 | 1.69 | 425 | 0.0579 | 0.4902 | -2.5495 | 1.0 | 3.0397 | -210.3050 | -155.7321 | -1.0520 | -0.8605 |
+| 0.0632 | 1.79 | 450 | 0.0577 | 0.4907 | -2.5514 | 1.0 | 3.0422 | -210.3238 | -155.7266 | -1.0522 | -0.8601 |
+| 0.0596 | 1.89 | 475 | 0.0579 | 0.4923 | -2.5509 | 1.0 | 3.0432 | -210.3188 | -155.7112 | -1.0527 | -0.8614 |
+| 0.0597 | 1.99 | 500 | 0.0587 | 0.4885 | -2.5446 | 1.0 | 3.0331 | -210.2559 | -155.7489 | -1.0525 | -0.8603 |
 
 
 ### Framework versions
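As a quick sanity check on the updated metrics: column names such as `Rewards/chosen`, `Rewards/rejected`, and `Rewards/margins` match DPO-style training logs (e.g. TRL's `DPOTrainer` — an assumption, since the card does not name the trainer), where the reported margin is simply the chosen reward minus the rejected reward. A minimal sketch verifying that identity against two rows of the updated results table:

```python
# (chosen reward, rejected reward, reported margin) taken from the step-25
# and step-500 rows of the new results table in this commit.
rows = [
    (0.0413, -0.1038, 0.1451),   # step 25
    (0.4885, -2.5446, 3.0331),   # step 500 (also the final eval numbers)
]

# DPO-style logs report margins = rewards/chosen - rewards/rejected.
for chosen, rejected, margin in rows:
    assert abs((chosen - rejected) - margin) < 1e-6

print("reward margins are internally consistent")
```

The same check passes for every row above, which is a useful smoke test when hand-editing a model card's metric table.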