Model save

Changed files:
- README.md (+25 -29)
- all_results.json (+5 -6)
- generation_config.json (+1 -1)
- model-00001-of-00003.safetensors (+1 -1)
- model-00002-of-00003.safetensors (+1 -1)
- model-00003-of-00003.safetensors (+1 -1)
- train_results.json (+5 -6)
- trainer_state.json (+0 -0)
README.md
CHANGED
@@ -13,23 +13,19 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sanqiang/wdpo/runs/a3szju9y)
 # zephyr-7b-dpo-full
 
 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Rewards/chosen: -
-- Rewards/rejected: -
-- Rewards/accuracies: 0.
-- Rewards/margins: 0.
-- Logps/rejected: -
-- Logps/chosen: -
-- Logits/rejected:
-- Logits/chosen:
-- Debug/policy Weights: 0.0530
-- Debug/losses: 0.0296
-- Debug/raw Losses: 0.5668
+- Loss: 0.5440
+- Rewards/chosen: -2.2940
+- Rewards/rejected: -3.0054
+- Rewards/accuracies: 0.7090
+- Rewards/margins: 0.7114
+- Logps/rejected: -451.6765
+- Logps/chosen: -373.9785
+- Logits/rejected: 0.3244
+- Logits/chosen: 0.0742
 
 ## Model description
@@ -64,25 +60,25 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch
-|
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
+| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
+|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
+| 0.6789 | 0.08 | 100 | 0.6770 | -0.1062 | -0.1422 | 0.5914 | 0.0360 | -165.3552 | -155.1927 | -2.7255 | -2.7337 |
+| 0.6062 | 0.16 | 200 | 0.6079 | -1.0212 | -1.3873 | 0.6670 | 0.3660 | -289.8622 | -246.6971 | -2.3696 | -2.3856 |
+| 0.5965 | 0.24 | 300 | 0.5907 | -1.3779 | -1.8008 | 0.6623 | 0.4229 | -331.2100 | -282.3621 | -2.2450 | -2.2656 |
+| 0.5729 | 0.32 | 400 | 0.5711 | -1.6763 | -2.2404 | 0.6828 | 0.5640 | -375.1720 | -312.2064 | -1.2920 | -1.3760 |
+| 0.5645 | 0.4 | 500 | 0.5639 | -2.0721 | -2.6869 | 0.6987 | 0.6147 | -419.8194 | -351.7883 | -0.6091 | -0.7860 |
+| 0.5513 | 0.48 | 600 | 0.5582 | -2.9237 | -3.5389 | 0.7108 | 0.6152 | -505.0223 | -436.9386 | 0.1224 | -0.1054 |
+| 0.5571 | 0.56 | 700 | 0.5559 | -2.7971 | -3.5456 | 0.7043 | 0.7485 | -505.6961 | -424.2823 | 0.2980 | 0.0356 |
+| 0.5609 | 0.64 | 800 | 0.5469 | -2.4314 | -3.0831 | 0.7108 | 0.6517 | -459.4439 | -387.7092 | 0.1922 | -0.0312 |
+| 0.5514 | 0.72 | 900 | 0.5474 | -2.4774 | -3.2082 | 0.6996 | 0.7308 | -471.9533 | -392.3096 | 0.5382 | 0.2860 |
+| 0.527 | 0.8 | 1000 | 0.5454 | -2.5040 | -3.2071 | 0.7080 | 0.7031 | -471.8454 | -394.9711 | 0.6372 | 0.3871 |
+| 0.5487 | 0.88 | 1100 | 0.5444 | -2.2851 | -2.9963 | 0.7090 | 0.7112 | -450.7599 | -373.0831 | 0.4336 | 0.1858 |
+| 0.5483 | 0.96 | 1200 | 0.5440 | -2.2940 | -3.0054 | 0.7090 | 0.7114 | -451.6765 | -373.9785 | 0.3244 | 0.0742 |
 
 
 ### Framework versions
 
-- Transformers 4.
+- Transformers 4.35.2
 - Pytorch 2.1.2+cu121
 - Datasets 2.14.6
-- Tokenizers 0.
+- Tokenizers 0.14.1
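The Rewards/* metrics above follow the usual DPO convention: margins are the gap between the chosen and rejected implicit rewards, and accuracies are the fraction of pairs where the chosen response scores higher. A minimal sketch checking that relationship against the final eval row; the toy batch below is made-up for illustration, not data from this run:

```python
# Check the internal consistency of the DPO eval metrics reported above.
# In DPO the implicit reward is beta * (log pi(y|x) - log pi_ref(y|x));
# margins and accuracies are derived from the chosen/rejected rewards.
rewards_chosen = -2.2940    # final eval row
rewards_rejected = -3.0054

margin = rewards_chosen - rewards_rejected
print(round(margin, 4))  # 0.7114, matching Rewards/margins

# Rewards/accuracies is the fraction of pairs where the chosen response
# gets the higher implicit reward; illustrated on a toy batch (made-up values).
chosen = [-2.1, -2.5, -1.8, -3.0]
rejected = [-3.0, -2.2, -2.6, -3.4]
accuracy = sum(c > r for c, r in zip(chosen, rejected)) / len(chosen)
print(accuracy)  # 0.75
```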
all_results.json
CHANGED
@@ -1,9 +1,8 @@
 {
-    "epoch": 0
-    "
-    "
-    "train_runtime": 10529.1077,
+    "epoch": 1.0,
+    "train_loss": 0.5712926928784438,
+    "train_runtime": 11525.4961,
     "train_samples": 160800,
-    "train_samples_per_second":
-    "train_steps_per_second": 0.
+    "train_samples_per_second": 13.952,
+    "train_steps_per_second": 0.109
 }
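The derived throughput fields in all_results.json follow from the recorded sample count and runtime. A quick arithmetic check; the implied global batch size of ~128 is an inference from these numbers, not a value recorded in this diff:

```python
# Sanity-check the throughput fields in all_results.json:
# samples/sec = train_samples / train_runtime, and steps/sec implies
# roughly 128 samples per optimizer step (an inference, not recorded here).
train_samples = 160_800
train_runtime = 11_525.4961  # seconds

samples_per_second = train_samples / train_runtime
print(round(samples_per_second, 3))  # 13.952

approx_total_steps = 0.109 * train_runtime  # ~1256 optimizer steps
print(round(train_samples / approx_total_steps))  # ~128 samples per step
```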
generation_config.json
CHANGED
@@ -2,5 +2,5 @@
   "_from_model_config": true,
   "bos_token_id": 1,
   "eos_token_id": 2,
-  "transformers_version": "4.
+  "transformers_version": "4.35.2"
 }
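The only change here is the transformers_version string. A quick way to confirm the resulting file is still valid JSON, with the contents copied from the new revision above:

```python
import json

# The new generation_config.json contents, parsed with the stdlib json
# module to confirm the edited file is well-formed.
raw = """
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.35.2"
}
"""
cfg = json.loads(raw)
print(cfg["transformers_version"])  # 4.35.2
```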
model-00001-of-00003.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:18e1cec63bd40f863dc594533ae9ac02d7bcdd4f57a17c1ef5d63193122a0814
 size 4943162336
model-00002-of-00003.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:565d4244afeda54e7f62be9e162a16c6892085c081422f02c7a001ecce587eb6
 size 4999819336
model-00003-of-00003.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:0debf1533b3a9f2ffea91ddec7f947ba3d1c43476aedcef3273235a227bb4ce5
 size 4540516344
train_results.json
CHANGED
@@ -1,9 +1,8 @@
 {
-    "epoch": 0
-    "
-    "
-    "train_runtime": 10529.1077,
+    "epoch": 1.0,
+    "train_loss": 0.5712926928784438,
+    "train_runtime": 11525.4961,
     "train_samples": 160800,
-    "train_samples_per_second":
-    "train_steps_per_second": 0.
+    "train_samples_per_second": 13.952,
+    "train_steps_per_second": 0.109
 }
trainer_state.json
CHANGED
The diff for this file is too large to render; see the raw diff.