wzhouad committed
Commit fe08d08
1 Parent(s): 6d2458f

Model save

README.md CHANGED
@@ -13,19 +13,23 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sanqiang/wdpo/runs/mswxqy0x)
 # zephyr-7b-dpo-full
 
 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5440
-- Rewards/chosen: -2.2940
-- Rewards/rejected: -3.0054
-- Rewards/accuracies: 0.7090
-- Rewards/margins: 0.7114
-- Logps/rejected: -451.6765
-- Logps/chosen: -373.9785
-- Logits/rejected: 0.3244
-- Logits/chosen: 0.0742
+- Loss: 0.0227
+- Rewards/chosen: -2.3113
+- Rewards/rejected: -2.8479
+- Rewards/accuracies: 0.6931
+- Rewards/margins: 0.5365
+- Logps/rejected: -435.4867
+- Logps/chosen: -375.3782
+- Logits/rejected: -1.4622
+- Logits/chosen: -1.5834
+- Debug/policy Weights: 0.0374
+- Debug/losses: 0.0212
+- Debug/raw Losses: 0.5682
 
 ## Model description
 
@@ -60,25 +64,25 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
-|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
-| 0.6789 | 0.08 | 100 | 0.6770 | -0.1062 | -0.1422 | 0.5914 | 0.0360 | -165.3552 | -155.1927 | -2.7255 | -2.7337 |
-| 0.6062 | 0.16 | 200 | 0.6079 | -1.0212 | -1.3873 | 0.6670 | 0.3660 | -289.8622 | -246.6971 | -2.3696 | -2.3856 |
-| 0.5965 | 0.24 | 300 | 0.5907 | -1.3779 | -1.8008 | 0.6623 | 0.4229 | -331.2100 | -282.3621 | -2.2450 | -2.2656 |
-| 0.5729 | 0.32 | 400 | 0.5711 | -1.6763 | -2.2404 | 0.6828 | 0.5640 | -375.1720 | -312.2064 | -1.2920 | -1.3760 |
-| 0.5645 | 0.4 | 500 | 0.5639 | -2.0721 | -2.6869 | 0.6987 | 0.6147 | -419.8194 | -351.7883 | -0.6091 | -0.7860 |
-| 0.5513 | 0.48 | 600 | 0.5582 | -2.9237 | -3.5389 | 0.7108 | 0.6152 | -505.0223 | -436.9386 | 0.1224 | -0.1054 |
-| 0.5571 | 0.56 | 700 | 0.5559 | -2.7971 | -3.5456 | 0.7043 | 0.7485 | -505.6961 | -424.2823 | 0.2980 | 0.0356 |
-| 0.5609 | 0.64 | 800 | 0.5469 | -2.4314 | -3.0831 | 0.7108 | 0.6517 | -459.4439 | -387.7092 | 0.1922 | -0.0312 |
-| 0.5514 | 0.72 | 900 | 0.5474 | -2.4774 | -3.2082 | 0.6996 | 0.7308 | -471.9533 | -392.3096 | 0.5382 | 0.2860 |
-| 0.527 | 0.8 | 1000 | 0.5454 | -2.5040 | -3.2071 | 0.7080 | 0.7031 | -471.8454 | -394.9711 | 0.6372 | 0.3871 |
-| 0.5487 | 0.88 | 1100 | 0.5444 | -2.2851 | -2.9963 | 0.7090 | 0.7112 | -450.7599 | -373.0831 | 0.4336 | 0.1858 |
-| 0.5483 | 0.96 | 1200 | 0.5440 | -2.2940 | -3.0054 | 0.7090 | 0.7114 | -451.6765 | -373.9785 | 0.3244 | 0.0742 |
+| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Debug/policy Weights | Debug/losses | Debug/raw Losses |
+|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------------------:|:------------:|:----------------:|
+| 0.1734 | 0.0796 | 100 | 0.1631 | -0.1425 | -0.1765 | 0.5924 | 0.0340 | -168.3475 | -158.4907 | -2.7043 | -2.7124 | 0.2381 | 0.1616 | 0.6787 |
+| 0.0795 | 0.1592 | 200 | 0.0826 | -0.7160 | -0.9309 | 0.6483 | 0.2150 | -243.7922 | -215.8411 | -2.4879 | -2.4997 | 0.1266 | 0.0800 | 0.6296 |
+| 0.0545 | 0.2388 | 300 | 0.0572 | -1.0974 | -1.4187 | 0.6642 | 0.3213 | -292.5661 | -253.9808 | -2.4160 | -2.4302 | 0.0894 | 0.0550 | 0.6166 |
+| 0.0288 | 0.3183 | 400 | 0.0302 | -1.9563 | -2.3772 | 0.6698 | 0.4209 | -388.4184 | -339.8692 | -2.2376 | -2.2573 | 0.0477 | 0.0287 | 0.6044 |
+| 0.0358 | 0.3979 | 500 | 0.0407 | -1.7169 | -2.1543 | 0.6698 | 0.4374 | -366.1241 | -315.9322 | -2.2265 | -2.2540 | 0.0659 | 0.0394 | 0.6064 |
+| 0.0309 | 0.4775 | 600 | 0.0302 | -1.9504 | -2.4092 | 0.6660 | 0.4587 | -391.6147 | -339.2857 | -2.0849 | -2.1159 | 0.0489 | 0.0287 | 0.5899 |
+| 0.0203 | 0.5571 | 700 | 0.0198 | -2.3315 | -2.7643 | 0.6856 | 0.4328 | -427.1261 | -377.3937 | -1.6613 | -1.7384 | 0.0317 | 0.0185 | 0.5808 |
+| 0.0192 | 0.6367 | 800 | 0.0182 | -2.5929 | -3.1225 | 0.6866 | 0.5297 | -462.9526 | -403.5321 | -1.0483 | -1.2122 | 0.0290 | 0.0169 | 0.5789 |
+| 0.0233 | 0.7163 | 900 | 0.0237 | -2.3310 | -2.8931 | 0.6810 | 0.5621 | -440.0111 | -377.3470 | -1.3096 | -1.4493 | 0.0387 | 0.0221 | 0.5726 |
+| 0.0213 | 0.7959 | 1000 | 0.0219 | -2.4229 | -2.9606 | 0.6931 | 0.5377 | -446.7564 | -386.5316 | -1.4880 | -1.6049 | 0.0357 | 0.0203 | 0.5694 |
+| 0.0229 | 0.8754 | 1100 | 0.0231 | -2.2736 | -2.7873 | 0.6950 | 0.5137 | -429.4283 | -371.6010 | -1.5527 | -1.6574 | 0.0379 | 0.0215 | 0.5695 |
+| 0.0216 | 0.9550 | 1200 | 0.0227 | -2.3113 | -2.8479 | 0.6931 | 0.5365 | -435.4867 | -375.3782 | -1.4622 | -1.5834 | 0.0374 | 0.0212 | 0.5682 |
 
 
 ### Framework versions
 
-- Transformers 4.35.2
+- Transformers 4.41.0.dev0
 - Pytorch 2.1.2+cu121
 - Datasets 2.14.6
-- Tokenizers 0.14.1
+- Tokenizers 0.19.1
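
Note the three new Debug columns in the updated card. In every row of the new table, Debug/losses equals (up to rounding) the product of Debug/policy Weights and Debug/raw Losses, consistent with the run's wandb project name ("wdpo") logging a weighted DPO objective in which a per-batch policy weight scales a standard DPO loss. A quick sanity check on the final evaluation row, using only the logged values from the table above (this illustrates the logs, not the training code):

```python
# Final-eval Debug values from the updated README table.
policy_weight = 0.0374  # Debug/policy Weights
raw_dpo_loss = 0.5682   # Debug/raw Losses
logged_loss = 0.0212    # Debug/losses

# The logged loss is the policy weight times the raw loss, up to rounding.
assert abs(policy_weight * raw_dpo_loss - logged_loss) < 1e-3
print(round(policy_weight * raw_dpo_loss, 4))  # 0.0213, matching 0.0212 to rounding
```

The same identity holds for the first row (0.2381 × 0.6787 ≈ 0.1616), so the new validation losses are not comparable to the old unweighted ones (0.5440 vs. 0.0227).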
all_results.json CHANGED
@@ -1,8 +1,9 @@
 {
-    "epoch": 1.0,
-    "train_loss": 0.5712926928784438,
-    "train_runtime": 11681.9838,
+    "epoch": 0.9996020692399522,
+    "total_flos": 0.0,
+    "train_loss": 0.048100706314442646,
+    "train_runtime": 10605.3952,
     "train_samples": 160800,
-    "train_samples_per_second": 13.765,
-    "train_steps_per_second": 0.108
+    "train_samples_per_second": 15.162,
+    "train_steps_per_second": 0.118
 }
config.json CHANGED
@@ -3,6 +3,7 @@
   "architectures": [
     "MistralForCausalLM"
   ],
+  "attention_dropout": 0.0,
   "bos_token_id": 1,
   "eos_token_id": 2,
   "hidden_act": "silu",
@@ -19,7 +20,7 @@
   "sliding_window": 4096,
   "tie_word_embeddings": false,
   "torch_dtype": "bfloat16",
-  "transformers_version": "4.35.2",
+  "transformers_version": "4.41.0.dev0",
   "use_cache": false,
   "vocab_size": 32000
 }
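
For reference, a minimal loading sketch that matches this config (MistralForCausalLM in bfloat16); the repo id is an assumption based on the committer and model name, not something stated in the diff:

```python
import torch
from transformers import AutoModelForCausalLM

# Repo id assumed from the committer and model name; adjust to the actual repo.
model = AutoModelForCausalLM.from_pretrained(
    "wzhouad/zephyr-7b-dpo-full",  # hypothetical repo id
    torch_dtype=torch.bfloat16,    # matches "torch_dtype": "bfloat16" above
    device_map="auto",             # requires accelerate; shards across devices
)
```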
generation_config.json CHANGED
@@ -2,5 +2,5 @@
   "_from_model_config": true,
   "bos_token_id": 1,
   "eos_token_id": 2,
-  "transformers_version": "4.35.2"
+  "transformers_version": "4.41.0.dev0"
 }
model-00001-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:18e1cec63bd40f863dc594533ae9ac02d7bcdd4f57a17c1ef5d63193122a0814
+oid sha256:cbb7d949f028367b0795dd95d9b15c3501adbb8c78279b67e8356a1dd530d4b8
 size 4943162336
model-00002-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:565d4244afeda54e7f62be9e162a16c6892085c081422f02c7a001ecce587eb6
+oid sha256:74b494a031cf899c7987f6b280d9daac8b29b89f36e9c3341cf052c4960ee7be
 size 4999819336
model-00003-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0debf1533b3a9f2ffea91ddec7f947ba3d1c43476aedcef3273235a227bb4ce5
+oid sha256:fe47e8bd6edfde1a3b06b5ea0c64c96609fb424d4d29f8a8c90c61e8a396c3f0
 size 4540516344
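
Each shard's LFS pointer records the SHA-256 of the shard's contents, so a downloaded file can be checked against the new oid above. A minimal sketch, assuming the shard sits in the current directory:

```python
import hashlib

# New oid for model-00003-of-00003.safetensors from the pointer above.
expected = "fe47e8bd6edfde1a3b06b5ea0c64c96609fb424d4d29f8a8c90c61e8a396c3f0"

h = hashlib.sha256()
with open("model-00003-of-00003.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == expected, "shard does not match its LFS pointer"
```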
tokenizer.json CHANGED
@@ -134,6 +134,7 @@
   "end_of_word_suffix": null,
   "fuse_unk": true,
   "byte_fallback": true,
+  "ignore_merges": false,
   "vocab": {
     "<unk>": 0,
     "<s>": 1,
tokenizer_config.json CHANGED
@@ -1,4 +1,6 @@
 {
+  "add_bos_token": true,
+  "add_eos_token": false,
   "added_tokens_decoder": {
     "0": {
       "content": "<unk>",
@@ -34,7 +36,6 @@
   "chat_template": "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}",
   "clean_up_tokenization_spaces": false,
   "eos_token": "</s>",
-  "legacy": true,
   "model_max_length": 2048,
   "pad_token": "</s>",
   "sp_model_kwargs": {},
train_results.json CHANGED
@@ -1,8 +1,9 @@
 {
-    "epoch": 1.0,
-    "train_loss": 0.5712926928784438,
-    "train_runtime": 11681.9838,
+    "epoch": 0.9996020692399522,
+    "total_flos": 0.0,
+    "train_loss": 0.048100706314442646,
+    "train_runtime": 10605.3952,
     "train_samples": 160800,
-    "train_samples_per_second": 13.765,
-    "train_steps_per_second": 0.108
+    "train_samples_per_second": 15.162,
+    "train_steps_per_second": 0.118
 }
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8bb53d245849c3f5bf20cd13d560c4772558c0e85063a62167df319248b606df
-size 5944
+oid sha256:21b832048d693e96f2216cb45a51485f398d64979535a99ee00ff1b4c6d780ea
+size 6456