Model save
- README.md +29 -25
- all_results.json +6 -5
- config.json +2 -1
- generation_config.json +1 -1
- model-00001-of-00003.safetensors +1 -1
- model-00002-of-00003.safetensors +1 -1
- model-00003-of-00003.safetensors +1 -1
- tokenizer.json +1 -0
- tokenizer_config.json +2 -1
- train_results.json +6 -5
- trainer_state.json +0 -0
- training_args.bin +2 -2
README.md
CHANGED
@@ -13,19 +13,23 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sanqiang/wdpo/runs/mswxqy0x)
 # zephyr-7b-dpo-full
 
 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Rewards/chosen: -2.
-- Rewards/rejected: -
-- Rewards/accuracies: 0.
-- Rewards/margins: 0.
-- Logps/rejected: -
-- Logps/chosen: -
-- Logits/rejected:
-- Logits/chosen:
+- Loss: 0.0227
+- Rewards/chosen: -2.3113
+- Rewards/rejected: -2.8479
+- Rewards/accuracies: 0.6931
+- Rewards/margins: 0.5365
+- Logps/rejected: -435.4867
+- Logps/chosen: -375.3782
+- Logits/rejected: -1.4622
+- Logits/chosen: -1.5834
+- Debug/policy Weights: 0.0374
+- Debug/losses: 0.0212
+- Debug/raw Losses: 0.5682
 
 ## Model description
 
@@ -60,25 +64,25 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch
-|
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
+| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Debug/policy Weights | Debug/losses | Debug/raw Losses |
+|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------------------:|:------------:|:----------------:|
+| 0.1734 | 0.0796 | 100 | 0.1631 | -0.1425 | -0.1765 | 0.5924 | 0.0340 | -168.3475 | -158.4907 | -2.7043 | -2.7124 | 0.2381 | 0.1616 | 0.6787 |
+| 0.0795 | 0.1592 | 200 | 0.0826 | -0.7160 | -0.9309 | 0.6483 | 0.2150 | -243.7922 | -215.8411 | -2.4879 | -2.4997 | 0.1266 | 0.0800 | 0.6296 |
+| 0.0545 | 0.2388 | 300 | 0.0572 | -1.0974 | -1.4187 | 0.6642 | 0.3213 | -292.5661 | -253.9808 | -2.4160 | -2.4302 | 0.0894 | 0.0550 | 0.6166 |
+| 0.0288 | 0.3183 | 400 | 0.0302 | -1.9563 | -2.3772 | 0.6698 | 0.4209 | -388.4184 | -339.8692 | -2.2376 | -2.2573 | 0.0477 | 0.0287 | 0.6044 |
+| 0.0358 | 0.3979 | 500 | 0.0407 | -1.7169 | -2.1543 | 0.6698 | 0.4374 | -366.1241 | -315.9322 | -2.2265 | -2.2540 | 0.0659 | 0.0394 | 0.6064 |
+| 0.0309 | 0.4775 | 600 | 0.0302 | -1.9504 | -2.4092 | 0.6660 | 0.4587 | -391.6147 | -339.2857 | -2.0849 | -2.1159 | 0.0489 | 0.0287 | 0.5899 |
+| 0.0203 | 0.5571 | 700 | 0.0198 | -2.3315 | -2.7643 | 0.6856 | 0.4328 | -427.1261 | -377.3937 | -1.6613 | -1.7384 | 0.0317 | 0.0185 | 0.5808 |
+| 0.0192 | 0.6367 | 800 | 0.0182 | -2.5929 | -3.1225 | 0.6866 | 0.5297 | -462.9526 | -403.5321 | -1.0483 | -1.2122 | 0.0290 | 0.0169 | 0.5789 |
+| 0.0233 | 0.7163 | 900 | 0.0237 | -2.3310 | -2.8931 | 0.6810 | 0.5621 | -440.0111 | -377.3470 | -1.3096 | -1.4493 | 0.0387 | 0.0221 | 0.5726 |
+| 0.0213 | 0.7959 | 1000 | 0.0219 | -2.4229 | -2.9606 | 0.6931 | 0.5377 | -446.7564 | -386.5316 | -1.4880 | -1.6049 | 0.0357 | 0.0203 | 0.5694 |
+| 0.0229 | 0.8754 | 1100 | 0.0231 | -2.2736 | -2.7873 | 0.6950 | 0.5137 | -429.4283 | -371.6010 | -1.5527 | -1.6574 | 0.0379 | 0.0215 | 0.5695 |
+| 0.0216 | 0.9550 | 1200 | 0.0227 | -2.3113 | -2.8479 | 0.6931 | 0.5365 | -435.4867 | -375.3782 | -1.4622 | -1.5834 | 0.0374 | 0.0212 | 0.5682 |
 
 
 ### Framework versions
 
-- Transformers 4.
+- Transformers 4.41.0.dev0
 - Pytorch 2.1.2+cu121
 - Datasets 2.14.6
-- Tokenizers 0.
+- Tokenizers 0.19.1
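A minimal loading sketch for the checkpoint this card describes (my addition, not part of the card): the directory name is a placeholder for wherever the shards and tokenizer files from this commit are downloaded, and bfloat16 mirrors the `torch_dtype` in config.json below.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "zephyr-7b-dpo-full"  # placeholder local path or hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype=torch.bfloat16)
```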
all_results.json
CHANGED
@@ -1,8 +1,9 @@
 {
-    "epoch":
-    "
-    "
+    "epoch": 0.9996020692399522,
+    "total_flos": 0.0,
+    "train_loss": 0.048100706314442646,
+    "train_runtime": 10605.3952,
     "train_samples": 160800,
-    "train_samples_per_second":
-    "train_steps_per_second": 0.
+    "train_samples_per_second": 15.162,
+    "train_steps_per_second": 0.118
 }
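The throughput field follows directly from the other entries; a quick arithmetic check (mine, not stored in the file):

```python
train_samples = 160800
train_runtime_s = 10605.3952

print(round(train_samples / train_runtime_s, 3))  # 15.162, the reported train_samples_per_second
```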
config.json
CHANGED
@@ -3,6 +3,7 @@
   "architectures": [
     "MistralForCausalLM"
   ],
+  "attention_dropout": 0.0,
   "bos_token_id": 1,
   "eos_token_id": 2,
   "hidden_act": "silu",
@@ -19,7 +20,7 @@
   "sliding_window": 4096,
   "tie_word_embeddings": false,
   "torch_dtype": "bfloat16",
-  "transformers_version": "4.
+  "transformers_version": "4.41.0.dev0",
   "use_cache": false,
   "vocab_size": 32000
 }
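To confirm the new field after downloading this commit, a short sketch (the path is a placeholder):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("zephyr-7b-dpo-full")  # placeholder path
print(config.attention_dropout)  # 0.0, the field added in this commit
print(config.sliding_window)     # 4096
```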
generation_config.json
CHANGED
@@ -2,5 +2,5 @@
   "_from_model_config": true,
   "bos_token_id": 1,
   "eos_token_id": 2,
-  "transformers_version": "4.
+  "transformers_version": "4.41.0.dev0"
 }
model-00001-of-00003.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:cbb7d949f028367b0795dd95d9b15c3501adbb8c78279b67e8356a1dd530d4b8
 size 4943162336
model-00002-of-00003.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:74b494a031cf899c7987f6b280d9daac8b29b89f36e9c3341cf052c4960ee7be
 size 4999819336
model-00003-of-00003.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:fe47e8bd6edfde1a3b06b5ea0c64c96609fb424d4d29f8a8c90c61e8a396c3f0
 size 4540516344
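The three shard entries above are Git LFS pointers; a downloaded shard can be checked against its pointer like this (a sketch using the third shard's digest and size from this commit):

```python
import hashlib
import os

path = "model-00003-of-00003.safetensors"
expected_sha256 = "fe47e8bd6edfde1a3b06b5ea0c64c96609fb424d4d29f8a8c90c61e8a396c3f0"
expected_size = 4540516344

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
        h.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert h.hexdigest() == expected_sha256, "sha256 mismatch"
print("shard matches its LFS pointer")
```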
tokenizer.json
CHANGED
@@ -134,6 +134,7 @@
   "end_of_word_suffix": null,
   "fuse_unk": true,
   "byte_fallback": true,
+  "ignore_merges": false,
   "vocab": {
     "<unk>": 0,
     "<s>": 1,
tokenizer_config.json
CHANGED
@@ -1,4 +1,6 @@
 {
+  "add_bos_token": true,
+  "add_eos_token": false,
   "added_tokens_decoder": {
     "0": {
       "content": "<unk>",
@@ -34,7 +36,6 @@
   "chat_template": "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}",
   "clean_up_tokenization_spaces": false,
   "eos_token": "</s>",
-  "legacy": true,
   "model_max_length": 2048,
   "pad_token": "</s>",
   "sp_model_kwargs": {},
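The chat_template above defines the Zephyr-style prompt format; once the tokenizer files from this commit are local, it can be rendered with `apply_chat_template` (the path is a placeholder, and the commented output is what the template is expected to produce):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zephyr-7b-dpo-full")  # placeholder path

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize DPO in one sentence."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# <|system|>
# You are a helpful assistant.</s>
# <|user|>
# Summarize DPO in one sentence.</s>
# <|assistant|>
```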
train_results.json
CHANGED
@@ -1,8 +1,9 @@
 {
-    "epoch":
-    "
-    "
+    "epoch": 0.9996020692399522,
+    "total_flos": 0.0,
+    "train_loss": 0.048100706314442646,
+    "train_runtime": 10605.3952,
     "train_samples": 160800,
-    "train_samples_per_second":
-    "train_steps_per_second": 0.
+    "train_samples_per_second": 15.162,
+    "train_steps_per_second": 0.118
 }
trainer_state.json
CHANGED
The diff for this file is too large to render.
See raw diff
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:21b832048d693e96f2216cb45a51485f398d64979535a99ee00ff1b4c6d780ea
+size 6456
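training_args.bin is the serialized training-arguments object the Trainer writes next to a checkpoint; a sketch of inspecting it with torch.load (with Pytorch 2.1.2, the version listed in the card, the default pickle loading applies, and any custom argument class used for training must be importable):

```python
import torch

args = torch.load("training_args.bin")
print(type(args).__name__)  # e.g. TrainingArguments or a DPO-specific subclass
print(args.learning_rate, args.per_device_train_batch_size)
```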