wzhouad committed 89b268e (1 parent: 8586b26)

Model save

README.md CHANGED
@@ -13,19 +13,23 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sanqiang/wdpo/runs/593342mu)
 # zephyr-7b-dpo-full
 
 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5440
-- Rewards/chosen: -2.2940
-- Rewards/rejected: -3.0054
-- Rewards/accuracies: 0.7090
-- Rewards/margins: 0.7114
-- Logps/rejected: -451.6765
-- Logps/chosen: -373.9785
-- Logits/rejected: 0.3244
-- Logits/chosen: 0.0742
+- Loss: 0.0314
+- Rewards/chosen: -1.8016
+- Rewards/rejected: -2.3386
+- Rewards/accuracies: 0.6996
+- Rewards/margins: 0.5369
+- Logps/rejected: -384.5558
+- Logps/chosen: -324.4056
+- Logits/rejected: -1.9462
+- Logits/chosen: -1.9728
+- Debug/policy Weights: 0.0527
+- Debug/losses: 0.0295
+- Debug/raw Losses: 0.5653
 
 ## Model description
 
@@ -60,25 +64,25 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
-|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
-| 0.6789 | 0.08 | 100 | 0.6770 | -0.1062 | -0.1422 | 0.5914 | 0.0360 | -165.3552 | -155.1927 | -2.7255 | -2.7337 |
-| 0.6062 | 0.16 | 200 | 0.6079 | -1.0212 | -1.3873 | 0.6670 | 0.3660 | -289.8622 | -246.6971 | -2.3696 | -2.3856 |
-| 0.5965 | 0.24 | 300 | 0.5907 | -1.3779 | -1.8008 | 0.6623 | 0.4229 | -331.2100 | -282.3621 | -2.2450 | -2.2656 |
-| 0.5729 | 0.32 | 400 | 0.5711 | -1.6763 | -2.2404 | 0.6828 | 0.5640 | -375.1720 | -312.2064 | -1.2920 | -1.3760 |
-| 0.5645 | 0.4 | 500 | 0.5639 | -2.0721 | -2.6869 | 0.6987 | 0.6147 | -419.8194 | -351.7883 | -0.6091 | -0.7860 |
-| 0.5513 | 0.48 | 600 | 0.5582 | -2.9237 | -3.5389 | 0.7108 | 0.6152 | -505.0223 | -436.9386 | 0.1224 | -0.1054 |
-| 0.5571 | 0.56 | 700 | 0.5559 | -2.7971 | -3.5456 | 0.7043 | 0.7485 | -505.6961 | -424.2823 | 0.2980 | 0.0356 |
-| 0.5609 | 0.64 | 800 | 0.5469 | -2.4314 | -3.0831 | 0.7108 | 0.6517 | -459.4439 | -387.7092 | 0.1922 | -0.0312 |
-| 0.5514 | 0.72 | 900 | 0.5474 | -2.4774 | -3.2082 | 0.6996 | 0.7308 | -471.9533 | -392.3096 | 0.5382 | 0.2860 |
-| 0.527 | 0.8 | 1000 | 0.5454 | -2.5040 | -3.2071 | 0.7080 | 0.7031 | -471.8454 | -394.9711 | 0.6372 | 0.3871 |
-| 0.5487 | 0.88 | 1100 | 0.5444 | -2.2851 | -2.9963 | 0.7090 | 0.7112 | -450.7599 | -373.0831 | 0.4336 | 0.1858 |
-| 0.5483 | 0.96 | 1200 | 0.5440 | -2.2940 | -3.0054 | 0.7090 | 0.7114 | -451.6765 | -373.9785 | 0.3244 | 0.0742 |
+| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Debug/policy Weights | Debug/losses | Debug/raw Losses |
+|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------------------:|:------------:|:----------------:|
+| 0.1733 | 0.0796 | 100 | 0.1633 | -0.1419 | -0.1761 | 0.5979 | 0.0341 | -168.3045 | -158.4381 | -2.7036 | -2.7117 | 0.2384 | 0.1618 | 0.6787 |
+| 0.0826 | 0.1592 | 200 | 0.0750 | -0.7884 | -1.0004 | 0.6362 | 0.2120 | -250.7415 | -223.0814 | -2.5404 | -2.5508 | 0.1172 | 0.0736 | 0.6317 |
+| 0.0515 | 0.2388 | 300 | 0.0573 | -1.2448 | -1.6060 | 0.6567 | 0.3612 | -311.3039 | -268.7243 | -2.3397 | -2.3553 | 0.0902 | 0.0558 | 0.6134 |
+| 0.0343 | 0.3183 | 400 | 0.0302 | -1.7725 | -2.1338 | 0.6623 | 0.3614 | -364.0837 | -321.4913 | -2.2855 | -2.3007 | 0.0482 | 0.0284 | 0.5994 |
+| 0.0432 | 0.3979 | 500 | 0.0432 | -1.5065 | -1.9835 | 0.6800 | 0.4770 | -349.0468 | -294.8951 | -2.2406 | -2.2643 | 0.0702 | 0.0407 | 0.5892 |
+| 0.0342 | 0.4775 | 600 | 0.0321 | -1.8281 | -2.3049 | 0.6875 | 0.4769 | -381.1920 | -327.0503 | -2.1134 | -2.1351 | 0.0527 | 0.0302 | 0.5812 |
+| 0.0283 | 0.5571 | 700 | 0.0283 | -1.8441 | -2.2808 | 0.6940 | 0.4366 | -378.7769 | -328.6566 | -1.9677 | -1.9900 | 0.0467 | 0.0268 | 0.5766 |
+| 0.023 | 0.6367 | 800 | 0.0244 | -2.0670 | -2.5677 | 0.6884 | 0.5008 | -407.4723 | -350.9413 | -1.9268 | -1.9515 | 0.0400 | 0.0228 | 0.5787 |
+| 0.032 | 0.7163 | 900 | 0.0335 | -1.7467 | -2.2731 | 0.6847 | 0.5264 | -378.0125 | -318.9173 | -1.9262 | -1.9521 | 0.0559 | 0.0316 | 0.5720 |
+| 0.0294 | 0.7959 | 1000 | 0.0289 | -1.9406 | -2.4746 | 0.6866 | 0.5340 | -398.1603 | -338.3062 | -1.9318 | -1.9580 | 0.0484 | 0.0271 | 0.5695 |
+| 0.0308 | 0.8754 | 1100 | 0.0311 | -1.8111 | -2.3364 | 0.7006 | 0.5253 | -384.3376 | -325.3560 | -1.9554 | -1.9814 | 0.0520 | 0.0291 | 0.5657 |
+| 0.0303 | 0.9550 | 1200 | 0.0314 | -1.8016 | -2.3386 | 0.6996 | 0.5369 | -384.5558 | -324.4056 | -1.9462 | -1.9728 | 0.0527 | 0.0295 | 0.5653 |
 
 
 ### Framework versions
 
-- Transformers 4.35.2
+- Transformers 4.41.0.dev0
 - Pytorch 2.1.2+cu121
 - Datasets 2.14.6
-- Tokenizers 0.14.1
+- Tokenizers 0.19.1
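The three new Debug/* columns appear to be related: at each eval step, Debug/losses is close to Debug/policy Weights times Debug/raw Losses, suggesting the reported loss is a policy-weighted variant of the raw DPO loss. A minimal sketch checking this at the final step; the multiplicative relationship is an assumption read off the table, not something documented in the card:

```python
# Assumed relationship: Debug/losses ~= Debug/policy Weights * Debug/raw Losses.
# Values are taken from the step-1200 eval row of the training-results table above.
policy_weight = 0.0527  # Debug/policy Weights
raw_loss = 0.5653       # Debug/raw Losses
reported = 0.0295       # Debug/losses

weighted = policy_weight * raw_loss
print(f"{weighted:.4f} vs reported {reported:.4f}")  # 0.0298 vs reported 0.0295
# The small residual is consistent with per-example weights being applied and
# averaged within the batch rather than multiplied at the aggregate level.
assert abs(weighted - reported) < 5e-4
```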
all_results.json CHANGED
@@ -1,8 +1,9 @@
 {
-    "epoch": 1.0,
-    "train_loss": 0.5712926928784438,
-    "train_runtime": 11570.2032,
+    "epoch": 0.9996020692399522,
+    "total_flos": 0.0,
+    "train_loss": 0.05419013021620595,
+    "train_runtime": 10439.8283,
     "train_samples": 160800,
-    "train_samples_per_second": 13.898,
-    "train_steps_per_second": 0.109
+    "train_samples_per_second": 15.403,
+    "train_steps_per_second": 0.12
 }
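The new throughput numbers are internally consistent; a quick sanity check (pure arithmetic on the values above; the effective batch size is a derived estimate, not a reported value):

```python
train_samples = 160800
train_runtime = 10439.8283  # seconds

samples_per_s = train_samples / train_runtime
print(round(samples_per_s, 3))  # 15.403, matching train_samples_per_second

# steps_per_second implies roughly 128 samples per optimizer step:
print(round(15.403 / 0.12))     # ~128 (coarse estimate; 0.12 is heavily rounded)
```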
config.json CHANGED
@@ -3,6 +3,7 @@
   "architectures": [
     "MistralForCausalLM"
   ],
+  "attention_dropout": 0.0,
   "bos_token_id": 1,
   "eos_token_id": 2,
   "hidden_act": "silu",
@@ -19,7 +20,7 @@
   "sliding_window": 4096,
   "tie_word_embeddings": false,
   "torch_dtype": "bfloat16",
-  "transformers_version": "4.35.2",
+  "transformers_version": "4.41.0.dev0",
   "use_cache": false,
   "vocab_size": 32000
 }
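The added attention_dropout field (with its default of 0.0) is simply serialized by the newer Transformers version, so model behavior is unchanged. A sketch of confirming it round-trips; the repo id is a guess based on the commit author and model name:

```python
from transformers import AutoConfig

# Hypothetical repo id; substitute the actual Hub path of this checkpoint.
cfg = AutoConfig.from_pretrained("wzhouad/zephyr-7b-dpo-full")
print(cfg.attention_dropout)  # 0.0, the field added in this commit
print(cfg.sliding_window)     # 4096
print(cfg.torch_dtype)        # torch.bfloat16
```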
generation_config.json CHANGED
@@ -2,5 +2,5 @@
   "_from_model_config": true,
   "bos_token_id": 1,
   "eos_token_id": 2,
-  "transformers_version": "4.35.2"
+  "transformers_version": "4.41.0.dev0"
 }
model-00001-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:18e1cec63bd40f863dc594533ae9ac02d7bcdd4f57a17c1ef5d63193122a0814
+oid sha256:624756bc628d67c51f273fac678e315f8ba6fd54288ea173f833e9ae718b95a2
 size 4943162336
model-00002-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:565d4244afeda54e7f62be9e162a16c6892085c081422f02c7a001ecce587eb6
+oid sha256:f15755f429340b206c16ed567b979564aececf809a2f9dd51e631db5b3263259
 size 4999819336
model-00003-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0debf1533b3a9f2ffea91ddec7f947ba3d1c43476aedcef3273235a227bb4ce5
+oid sha256:378e6024c6dcbae3d23d4d81403dcddd0b3c7461b5bd3d67ca94d6038178f1e8
 size 4540516344
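The .safetensors entries are Git LFS pointers, so only the sha256 oid changed in each; the shard sizes are identical, meaning the tensor layout is unchanged and only the weights differ. A minimal sketch (standard library only) for verifying a downloaded shard against its pointer; the expected digest below is the new oid of shard 1:

```python
import hashlib

# Recompute the sha256 of a downloaded shard and compare it against the
# oid recorded in the Git LFS pointer above.
EXPECTED = "624756bc628d67c51f273fac678e315f8ba6fd54288ea173f833e9ae718b95a2"

h = hashlib.sha256()
with open("model-00001-of-00003.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
        h.update(chunk)
print(h.hexdigest() == EXPECTED)  # True if the download is intact
```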
tokenizer.json CHANGED
@@ -134,6 +134,7 @@
   "end_of_word_suffix": null,
   "fuse_unk": true,
   "byte_fallback": true,
+  "ignore_merges": false,
   "vocab": {
     "<unk>": 0,
     "<s>": 1,
tokenizer_config.json CHANGED
@@ -1,4 +1,6 @@
 {
+  "add_bos_token": true,
+  "add_eos_token": false,
   "added_tokens_decoder": {
     "0": {
       "content": "<unk>",
@@ -34,7 +36,6 @@
   "chat_template": "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}",
   "clean_up_tokenization_spaces": false,
   "eos_token": "</s>",
-  "legacy": true,
   "model_max_length": 2048,
   "pad_token": "</s>",
   "sp_model_kwargs": {},
train_results.json CHANGED
@@ -1,8 +1,9 @@
 {
-    "epoch": 1.0,
-    "train_loss": 0.5712926928784438,
-    "train_runtime": 11570.2032,
+    "epoch": 0.9996020692399522,
+    "total_flos": 0.0,
+    "train_loss": 0.05419013021620595,
+    "train_runtime": 10439.8283,
     "train_samples": 160800,
-    "train_samples_per_second": 13.898,
-    "train_steps_per_second": 0.109
+    "train_samples_per_second": 15.403,
+    "train_steps_per_second": 0.12
 }
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:958233371dca87bdf35a6c068f9b6ae992d79adfabd8435819b38992d9192fd4
-size 5944
+oid sha256:a93e4ac4c3121c6e616e15a605b2e1203065f3210634f1848b3deea40c1d6c8d
+size 6456
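training_args.bin grew from 5944 to 6456 bytes, consistent with TrainingArguments gaining fields between Transformers 4.35.2 and 4.41.0.dev0. The file is a pickled TrainingArguments object, so it can be inspected with torch.load when a compatible transformers version is installed; a sketch, not an official API for this file:

```python
import torch

# Unpickling requires a transformers version that can resolve the
# TrainingArguments class this file was saved with.
args = torch.load("training_args.bin", weights_only=False)
print(type(args).__name__)  # e.g. TrainingArguments
print(args.num_train_epochs, args.learning_rate)
```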