wzhouad committed
Commit f74ccda (parent: 53ab59e)

Model save
README.md CHANGED

```diff
@@ -14,16 +14,6 @@ should probably proofread and complete it, then remove this comment. -->
 # zephyr-7b-dpo-full
 
 This model was trained from scratch on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.5304
-- Rewards/chosen: -2.0877
-- Rewards/rejected: -3.5813
-- Rewards/accuracies: 0.7969
-- Rewards/margins: 1.4935
-- Logps/rejected: -669.7546
-- Logps/chosen: -512.3596
-- Logits/rejected: -0.0805
-- Logits/chosen: 0.0044
 
 ## Model description
 
@@ -43,12 +33,12 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-06
-- train_batch_size: 4
+- train_batch_size: 2
 - eval_batch_size: 8
 - seed: 4
 - distributed_type: multi-GPU
 - num_devices: 8
-- gradient_accumulation_steps: 4
+- gradient_accumulation_steps: 8
 - total_train_batch_size: 128
 - total_eval_batch_size: 64
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
@@ -58,17 +48,6 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
-|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
-| 0.6209 | 0.21 | 100 | 0.6260 | -0.2545 | -0.6360 | 0.6953 | 0.3815 | -375.2259 | -329.0332 | 0.3909 | 0.3389 |
-| 0.5447 | 0.42 | 200 | 0.5506 | -0.6231 | -1.4102 | 0.7852 | 0.7871 | -452.6489 | -365.8961 | 0.5763 | 0.5112 |
-| 0.5257 | 0.63 | 300 | 0.5325 | -0.9411 | -1.9562 | 0.7617 | 1.0151 | -507.2468 | -397.6915 | 0.2916 | 0.2656 |
-| 0.5016 | 0.84 | 400 | 0.5168 | -1.1370 | -2.2739 | 0.7930 | 1.1368 | -539.0118 | -417.2896 | 0.1301 | 0.1581 |
-| 0.3557 | 1.05 | 500 | 0.5232 | -1.5956 | -3.0234 | 0.7852 | 1.4277 | -613.9626 | -463.1488 | 0.1606 | 0.1971 |
-| 0.3459 | 1.26 | 600 | 0.5179 | -1.8408 | -3.1490 | 0.7969 | 1.3082 | -626.5206 | -487.6650 | 0.1677 | 0.2149 |
-| 0.3321 | 1.47 | 700 | 0.5331 | -2.0696 | -3.5323 | 0.7891 | 1.4626 | -664.8507 | -510.5476 | -0.0690 | 0.0033 |
-| 0.2983 | 1.67 | 800 | 0.5289 | -2.0385 | -3.4952 | 0.7930 | 1.4566 | -661.1429 | -507.4386 | -0.0746 | 0.0060 |
-| 0.3235 | 1.88 | 900 | 0.5304 | -2.0877 | -3.5813 | 0.7969 | 1.4935 | -669.7546 | -512.3596 | -0.0805 | 0.0044 |
 
 
 ### Framework versions
```
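The two hyperparameter changes in this diff offset each other: halving the per-device batch size while doubling the gradient accumulation steps leaves the effective batch size at 128. A quick sketch of the arithmetic (the helper name is ours, not from any trainer API):

```python
# Effective (total) train batch size in multi-GPU training with
# gradient accumulation: per_device * num_devices * accum_steps.
def total_train_batch_size(per_device: int, num_devices: int, accum_steps: int) -> int:
    return per_device * num_devices * accum_steps

# Before this commit: 4 per device, 8 GPUs, 4 accumulation steps.
old = total_train_batch_size(4, 8, 4)
# After this commit: 2 per device, 8 GPUs, 8 accumulation steps.
new = total_train_batch_size(2, 8, 8)
print(old, new)  # both 128, matching total_train_batch_size in the README
```

This is why `total_train_batch_size: 128` appears unchanged on both sides of the diff.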
all_results.json CHANGED

```diff
@@ -1,8 +1,8 @@
 {
     "epoch": 2.0,
-    "train_loss": 0.44723546317538376,
-    "train_runtime": 8668.6976,
-    "train_samples": 61134,
-    "train_samples_per_second": 14.105,
-    "train_steps_per_second": 0.11
+    "train_loss": 0.11496639693108927,
+    "train_runtime": 24857.3347,
+    "train_samples": 106682,
+    "train_samples_per_second": 8.584,
+    "train_steps_per_second": 0.067
 }
```
model-00001-of-00004.safetensors CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4400d5cc92103182c0288602258bcb6a357097c64fdd3c11599cc42caa1ffea8
+oid sha256:1b548fafbd9efff630df189b0bf00fc664acb471a1214bad1dee2a5ea6974fe6
 size 4976698672
```
model-00002-of-00004.safetensors CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d49e56f1ffb7e3b39edf0324f5f7b5fa8049cb6b5243bb8606d58cea6fd54f4c
+oid sha256:5b67fc61d7577b913d32c3bfd018089599c95780f44fe89e5e1dbedb56a40586
 size 4999802720
```
model-00003-of-00004.safetensors CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:be757ff962fe896c71369dfc072edc1b45a8a8170b10760f068e2d5c05ec06f0
+oid sha256:fdd3743d2c166fcadbed6ea88c7574aec5bcf890a0dddb4fb54b675acfb7c9e8
 size 4915916176
```
model-00004-of-00004.safetensors CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4f1b4fe6c8278ecf71a36df42146155fd60f564232f4a0499a05cab556a07fb1
+oid sha256:3f3acebda78f6b10ff7250ea1c5c6805c4ea09dbadf2d15c05e328527d01207c
 size 1168138808
```
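The safetensors entries above are Git LFS pointer files: only the `oid` changes between commits, while the `size` stays identical, so each shard was retrained but kept the same byte length. A minimal sketch of parsing such a pointer (the `parse_lfs_pointer` helper is illustrative, not part of any library):

```python
# A Git LFS pointer is a short key-value stanza: version URL, object
# hash ("oid"), and file size in bytes. Split each line on the first
# space to recover the fields.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents taken from the model-00001 diff above (new side).
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:1b548fafbd9efff630df189b0bf00fc664acb471a1214bad1dee2a5ea6974fe6
size 4976698672
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 4976698672 bytes, ~4.98 GB for this shard
```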
train_results.json CHANGED

```diff
@@ -1,8 +1,8 @@
 {
     "epoch": 2.0,
-    "train_loss": 0.44723546317538376,
-    "train_runtime": 8668.6976,
-    "train_samples": 61134,
-    "train_samples_per_second": 14.105,
-    "train_steps_per_second": 0.11
+    "train_loss": 0.11496639693108927,
+    "train_runtime": 24857.3347,
+    "train_samples": 106682,
+    "train_samples_per_second": 8.584,
+    "train_steps_per_second": 0.067
 }
```
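The updated metrics are internally consistent: throughput is roughly `train_samples × epoch / train_runtime`. A quick sanity check using only the values from the diff above:

```python
# Values copied from the new side of the train_results.json diff.
train_samples = 106682
epochs = 2.0
train_runtime = 24857.3347  # seconds

# samples/sec over the whole run: total samples seen / wall-clock time.
samples_per_second = train_samples * epochs / train_runtime
print(samples_per_second)  # ~8.58, consistent with the logged 8.584
```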
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_args.bin CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bf44d047df68bef9d42ea33d5bfa7fdecd645ea74f86b115b9c82ad801a096e6
+oid sha256:fd7260ee010ca4a604f5b589d0f690f6be98dc3409be725d29e2e433c0a137f5
 size 6648
```