wzhouad committed on
Commit 4ff61cc (1 parent: 536ccd2)

Model save

README.md CHANGED
@@ -16,19 +16,6 @@ should probably proofread and complete it, then remove this comment. -->
  # zephyr-7b-dpo-full
  
  This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.0050
- - Rewards/chosen: -1.4001
- - Rewards/rejected: -1.6028
- - Rewards/accuracies: 0.6101
- - Rewards/margins: 0.2027
- - Logps/rejected: -311.4186
- - Logps/chosen: -284.5843
- - Logits/rejected: -2.3597
- - Logits/chosen: -2.3712
- - Debug/policy Weights: 0.0065
- - Debug/losses: 0.0043
- - Debug/raw Losses: 0.6467
  
  ## Model description
  
@@ -50,7 +37,7 @@ The following hyperparameters were used during training:
  - learning_rate: 5e-07
  - train_batch_size: 8
  - eval_batch_size: 8
- - seed: 42
+ - seed: 1
  - distributed_type: multi-GPU
  - num_devices: 8
  - gradient_accumulation_steps: 2
@@ -63,20 +50,6 @@ The following hyperparameters were used during training:
  
  ### Training results
  
- | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Debug/policy Weights | Debug/losses | Debug/raw Losses |
- |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------------------:|:------------:|:----------------:|
- | 0.0493 | 0.08 | 100 | 0.0465 | -0.1430 | -0.1669 | 0.5905 | 0.0239 | -167.8270 | -158.8738 | -2.7118 | -2.7198 | 0.0662 | 0.0453 | 0.6827 |
- | 0.008 | 0.16 | 200 | 0.0085 | -0.8731 | -0.9349 | 0.5849 | 0.0618 | -244.6274 | -231.8869 | -2.5966 | -2.6043 | 0.0119 | 0.0079 | 0.6736 |
- | 0.007 | 0.24 | 300 | 0.0084 | -0.9513 | -1.0922 | 0.6110 | 0.1409 | -260.3554 | -239.7077 | -2.5912 | -2.6003 | 0.0117 | 0.0076 | 0.6574 |
- | 0.0065 | 0.32 | 400 | 0.0079 | -1.0231 | -1.1828 | 0.6315 | 0.1597 | -269.4169 | -246.8811 | -2.6498 | -2.6594 | 0.0111 | 0.0074 | 0.6524 |
- | 0.0054 | 0.4 | 500 | 0.0054 | -1.2147 | -1.3633 | 0.6054 | 0.1486 | -287.4665 | -266.0439 | -2.5605 | -2.5698 | 0.0072 | 0.0047 | 0.6539 |
- | 0.0045 | 0.48 | 600 | 0.0047 | -1.3549 | -1.5390 | 0.6147 | 0.1841 | -305.0333 | -280.0664 | -2.4873 | -2.4979 | 0.0061 | 0.0039 | 0.6487 |
- | 0.0058 | 0.56 | 700 | 0.0061 | -1.2583 | -1.4259 | 0.6045 | 0.1676 | -293.7197 | -270.4000 | -2.4961 | -2.5069 | 0.0079 | 0.0050 | 0.6482 |
- | 0.0039 | 0.64 | 800 | 0.0037 | -1.5176 | -1.6919 | 0.5896 | 0.1743 | -320.3220 | -296.3322 | -2.4092 | -2.4203 | 0.0048 | 0.0032 | 0.6584 |
- | 0.005 | 0.72 | 900 | 0.0049 | -1.3883 | -1.5704 | 0.6063 | 0.1820 | -308.1698 | -283.4043 | -2.3689 | -2.3800 | 0.0064 | 0.0042 | 0.6486 |
- | 0.0044 | 0.8 | 1000 | 0.0049 | -1.4264 | -1.6227 | 0.5989 | 0.1963 | -313.4052 | -287.2113 | -2.3493 | -2.3607 | 0.0063 | 0.0041 | 0.6497 |
- | 0.0059 | 0.88 | 1100 | 0.0051 | -1.3862 | -1.5910 | 0.6110 | 0.2047 | -310.2328 | -283.1982 | -2.3569 | -2.3684 | 0.0067 | 0.0044 | 0.6455 |
- | 0.0043 | 0.96 | 1200 | 0.0050 | -1.4001 | -1.6028 | 0.6101 | 0.2027 | -311.4186 | -284.5843 | -2.3597 | -2.3712 | 0.0065 | 0.0043 | 0.6467 |
  
  
  ### Framework versions
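
For reference, the reward metrics removed from the model card above (Rewards/chosen, Rewards/rejected, Rewards/margins, Rewards/accuracies) follow the standard DPO bookkeeping: each reward is beta times the policy-vs-reference log-probability gap on a completion, and the margin is chosen minus rejected (e.g. -1.4001 - (-1.6028) ≈ 0.2027 in the final eval row). The snippet below is a minimal illustrative sketch of that computation, not the training code behind this commit; the tensor inputs and beta = 0.1 are assumptions.

```python
# Illustrative sketch of how DPO reward metrics are typically computed.
# Not the training code for this commit; beta and the inputs are assumed.
import torch
import torch.nn.functional as F

def dpo_metrics(policy_chosen_logps: torch.Tensor,
                policy_rejected_logps: torch.Tensor,
                ref_chosen_logps: torch.Tensor,
                ref_rejected_logps: torch.Tensor,
                beta: float = 0.1):
    # Per-example "rewards": beta-scaled log-prob gap vs. the frozen reference model.
    rewards_chosen = beta * (policy_chosen_logps - ref_chosen_logps)
    rewards_rejected = beta * (policy_rejected_logps - ref_rejected_logps)
    margins = rewards_chosen - rewards_rejected
    loss = -F.logsigmoid(margins).mean()  # standard DPO objective
    return {
        "rewards/chosen": rewards_chosen.mean().item(),
        "rewards/rejected": rewards_rejected.mean().item(),
        "rewards/margins": margins.mean().item(),
        "rewards/accuracies": (margins > 0).float().mean().item(),
        "loss": loss.item(),
    }
```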
all_results.json CHANGED
@@ -1,8 +1,8 @@
  {
      "epoch": 1.0,
-     "train_loss": 0.01215437763655309,
-     "train_runtime": 11158.2549,
-     "train_samples": 160800,
-     "train_samples_per_second": 14.411,
-     "train_steps_per_second": 0.113
+     "train_loss": 0.6583965634836734,
+     "train_runtime": 2364.6005,
+     "train_samples": 39494,
+     "train_samples_per_second": 16.702,
+     "train_steps_per_second": 0.131
  }
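
As a quick consistency check on the updated numbers above (illustrative arithmetic, not part of the repository): the reported throughput should follow directly from the sample count and runtime.

```python
# Recompute the reported throughput from the new all_results.json values.
train_samples = 39494
train_runtime = 2364.6005  # seconds

print(round(train_samples / train_runtime, 3))  # 16.702, matching train_samples_per_second
```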
model-00001-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f07ea723b5a4f4136e492ae69d3665813ff7d23e6749a21988a9a3eeaf0c0f3a
+ oid sha256:ba6718190a29880b843d57531592a142f2ad5e4232e06a91841efc9b7eb1c271
  size 4943162336
model-00002-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:620339b4fe05468a571491ab152f8ed5a30a27d4ae4bdadc41e57f2a212d0e5b
+ oid sha256:8ef54856db951e19586c7aee1eb24c55e74bd8278a5682d4db9f31dbee6e40b4
  size 4999819336
model-00003-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:148235695f6e08d4c98d258464fe682c7a309502a667a5f257f0c2c4e5cc08e5
+ oid sha256:dfd53846094ea6efce814b428ea5c366e3c48aa8f0a7ba355ee48ca6ea31bc19
  size 4540516344
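
The three entries above are Git LFS pointer files: the repository tracks only each shard's sha256 and byte size, with the actual weights stored out of band. A downloaded shard can be checked against its pointer with a local hash; the sketch below uses the new pointer for model-00001-of-00003.safetensors, and the local file path is an assumption.

```python
# Verify a locally downloaded shard against the sha256 recorded in its LFS pointer.
# Expected hash is taken from this commit; the file path is illustrative.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "ba6718190a29880b843d57531592a142f2ad5e4232e06a91841efc9b7eb1c271"
print(sha256_of("model-00001-of-00003.safetensors") == expected)
```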
train_results.json CHANGED
@@ -1,8 +1,8 @@
  {
      "epoch": 1.0,
-     "train_loss": 0.01215437763655309,
-     "train_runtime": 11158.2549,
-     "train_samples": 160800,
-     "train_samples_per_second": 14.411,
-     "train_steps_per_second": 0.113
+     "train_loss": 0.6583965634836734,
+     "train_runtime": 2364.6005,
+     "train_samples": 39494,
+     "train_samples_per_second": 16.702,
+     "train_steps_per_second": 0.131
  }
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:86c6076ce7ce9037a45003b5096c65ec05df5e9ccb968e3d7c09b8899236c16d
+ oid sha256:0245667145154017ec234b4557665ef5cfae6ea784767b84f1042b62f16399a5
  size 5944
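
The weight shards and tokenizer files updated in this commit are consumed automatically by `transformers`: `from_pretrained` resolves the three safetensors shards through the repo's index file. Below is a hedged usage sketch that assumes the repository id is wzhouad/zephyr-7b-dpo-full (inferred from the commit author and model name, not stated in this diff) and that the tokenizer carries the usual Zephyr chat template.

```python
# Illustrative loading/generation sketch; repo id and chat template are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wzhouad/zephyr-7b-dpo-full"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What does DPO fine-tuning change about a model?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```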