nnheui committed
Commit
91aef91
1 Parent(s): 728a94f

Model save

README.md ADDED
@@ -0,0 +1,89 @@
+ ---
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ model-index:
+ - name: pythia-1.4b-dpo-full
+   results: []
+ ---
+ 
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+ 
+ # pythia-1.4b-dpo-full
+ 
+ This model was trained with DPO (via TRL) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.6403
+ - Rewards/chosen: 0.6094
+ - Rewards/rejected: 0.4102
+ - Rewards/accuracies: 0.5893
+ - Rewards/margins: 0.2002
+ - Logps/rejected: -2024.0
+ - Logps/chosen: -2320.0
+ - Logits/rejected: -0.6719
+ - Logits/chosen: -0.6172
+ 
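The Rewards/* figures above are DPO's implicit rewards: β times the policy-vs-reference log-probability ratio of a completion, with the chosen-minus-rejected margin driving a logistic loss. A minimal sketch in plain Python; β = 0.1 and the log-probabilities below are purely illustrative, not values from this run:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Sigmoid DPO loss for one preference pair (illustrative sketch)."""
    # Implicit rewards: beta-scaled log-ratio of policy vs. reference model.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log sigmoid(margin): small when the policy prefers the chosen response.
    loss = -math.log(1.0 / (1.0 + math.exp(-margin)))
    return loss, chosen_reward, rejected_reward

# Hypothetical per-sequence log-probs (policy chosen/rejected, ref chosen/rejected).
loss, cr, rr = dpo_loss(-2300.0, -2030.0, -2320.0, -2024.0)
```

Rewards/accuracies is then the fraction of eval pairs whose margin is positive.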
+ ## Model description
+ 
+ More information needed
+ 
+ ## Intended uses & limitations
+ 
+ More information needed
+ 
+ ## Training and evaluation data
+ 
+ More information needed
+ 
+ ## Training procedure
+ 
+ ### Training hyperparameters
+ 
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-07
+ - train_batch_size: 5
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 6
+ - total_train_batch_size: 30
+ - total_eval_batch_size: 48
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+ 
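The cosine schedule with 10% warmup can be sketched as below. The `total_steps` value of ~2038 is an assumption derived from 61,135 training samples at an effective batch size of 30, not a logged value:

```python
import math

def lr_at(step, total_steps=2038, base_lr=5e-7, warmup_ratio=0.1):
    """Linear warmup to base_lr, then cosine decay to 0 (sketch of the
    schedule implied by the hyperparameters above)."""
    warmup_steps = int(total_steps * warmup_ratio)  # ~203 steps here
    if step < warmup_steps:
        # Linear ramp from 0 up to base_lr over the warmup phase.
        return base_lr * step / max(1, warmup_steps)
    # Cosine decay from base_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```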
+ ### Training results
+ 
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
+ |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
+ | 0.684 | 0.05 | 100 | 0.6768 | 0.2314 | 0.1904 | 0.4494 | 0.0405 | -2048.0 | -2352.0 | -0.7227 | -0.6641 |
+ | 0.663 | 0.1 | 200 | 0.6566 | 0.5977 | 0.4883 | 0.4940 | 0.1108 | -2016.0 | -2320.0 | -0.7266 | -0.6680 |
+ | 0.6529 | 0.15 | 300 | 0.6513 | 0.625 | 0.4941 | 0.5149 | 0.1279 | -2016.0 | -2320.0 | -0.7188 | -0.6562 |
+ | 0.6371 | 0.2 | 400 | 0.6491 | 0.6562 | 0.5 | 0.5595 | 0.1523 | -2016.0 | -2304.0 | -0.7266 | -0.6680 |
+ | 0.6206 | 0.25 | 500 | 0.6466 | 0.5391 | 0.3945 | 0.5952 | 0.1445 | -2024.0 | -2320.0 | -0.7148 | -0.6562 |
+ | 0.686 | 0.29 | 600 | 0.6446 | 0.5781 | 0.4180 | 0.5714 | 0.1592 | -2024.0 | -2320.0 | -0.7188 | -0.6602 |
+ | 0.6459 | 0.34 | 700 | 0.6449 | 0.5508 | 0.3633 | 0.6012 | 0.1885 | -2032.0 | -2320.0 | -0.6875 | -0.6289 |
+ | 0.6458 | 0.39 | 800 | 0.6421 | 0.5586 | 0.3867 | 0.5774 | 0.1709 | -2024.0 | -2320.0 | -0.6953 | -0.6406 |
+ | 0.6451 | 0.44 | 900 | 0.6398 | 0.7109 | 0.5039 | 0.5685 | 0.2070 | -2016.0 | -2304.0 | -0.6719 | -0.6133 |
+ | 0.6213 | 0.49 | 1000 | 0.6407 | 0.7734 | 0.5742 | 0.5714 | 0.2012 | -2008.0 | -2304.0 | -0.6602 | -0.6016 |
+ | 0.6313 | 0.54 | 1100 | 0.6387 | 0.5391 | 0.3555 | 0.5893 | 0.1807 | -2032.0 | -2320.0 | -0.6680 | -0.6094 |
+ | 0.6298 | 0.59 | 1200 | 0.6380 | 0.6953 | 0.4922 | 0.6042 | 0.2031 | -2016.0 | -2304.0 | -0.6523 | -0.5977 |
+ | 0.6461 | 0.64 | 1300 | 0.6396 | 0.5586 | 0.3613 | 0.5863 | 0.1963 | -2032.0 | -2320.0 | -0.6914 | -0.6367 |
+ | 0.6258 | 0.69 | 1400 | 0.6360 | 0.6914 | 0.4727 | 0.5923 | 0.2207 | -2016.0 | -2304.0 | -0.6758 | -0.6172 |
+ | 0.6347 | 0.74 | 1500 | 0.6375 | 0.625 | 0.4141 | 0.5893 | 0.2100 | -2024.0 | -2320.0 | -0.6641 | -0.6094 |
+ | 0.6185 | 0.79 | 1600 | 0.6382 | 0.5977 | 0.3926 | 0.6042 | 0.2051 | -2032.0 | -2320.0 | -0.6797 | -0.625 |
+ | 0.6408 | 0.83 | 1700 | 0.6374 | 0.5977 | 0.3926 | 0.5952 | 0.2041 | -2024.0 | -2320.0 | -0.6719 | -0.6172 |
+ | 0.662 | 0.88 | 1800 | 0.6355 | 0.6094 | 0.3984 | 0.6012 | 0.2119 | -2024.0 | -2320.0 | -0.6836 | -0.6289 |
+ | 0.6385 | 0.93 | 1900 | 0.6379 | 0.6055 | 0.3926 | 0.625 | 0.2129 | -2024.0 | -2320.0 | -0.6758 | -0.6211 |
+ | 0.6154 | 0.98 | 2000 | 0.6381 | 0.6094 | 0.4043 | 0.6012 | 0.2041 | -2024.0 | -2320.0 | -0.6758 | -0.6211 |
+ 
+ 
+ ### Framework versions
+ 
+ - Transformers 4.38.2
+ - Pytorch 2.2.1
+ - Datasets 2.14.6
+ - Tokenizers 0.15.2
all_results.json ADDED
@@ -0,0 +1,21 @@
+ {
+     "epoch": 1.0,
+     "eval_logits/chosen": -0.6171875,
+     "eval_logits/rejected": -0.671875,
+     "eval_logps/chosen": -2320.0,
+     "eval_logps/rejected": -2024.0,
+     "eval_loss": 0.6403203010559082,
+     "eval_rewards/accuracies": 0.5892857313156128,
+     "eval_rewards/chosen": 0.609375,
+     "eval_rewards/margins": 0.2001953125,
+     "eval_rewards/rejected": 0.41015625,
+     "eval_runtime": 85.4736,
+     "eval_samples": 2000,
+     "eval_samples_per_second": 23.399,
+     "eval_steps_per_second": 0.491,
+     "train_loss": 0.6502101503246783,
+     "train_runtime": 8979.6364,
+     "train_samples": 61135,
+     "train_samples_per_second": 6.808,
+     "train_steps_per_second": 0.227
+ }
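The throughput fields in this file are internally consistent: `*_samples_per_second` is just samples divided by runtime. A quick check, with the values copied from the JSON above:

```python
# Values copied from all_results.json above.
train_samples = 61135
train_runtime = 8979.6364  # seconds
eval_samples = 2000
eval_runtime = 85.4736     # seconds

# Derived throughput should match the logged *_samples_per_second fields.
train_sps = train_samples / train_runtime
eval_sps = eval_samples / eval_runtime
```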
eval_results.json ADDED
@@ -0,0 +1,16 @@
+ {
+     "epoch": 1.0,
+     "eval_logits/chosen": -0.6171875,
+     "eval_logits/rejected": -0.671875,
+     "eval_logps/chosen": -2320.0,
+     "eval_logps/rejected": -2024.0,
+     "eval_loss": 0.6403203010559082,
+     "eval_rewards/accuracies": 0.5892857313156128,
+     "eval_rewards/chosen": 0.609375,
+     "eval_rewards/margins": 0.2001953125,
+     "eval_rewards/rejected": 0.41015625,
+     "eval_runtime": 85.4736,
+     "eval_samples": 2000,
+     "eval_samples_per_second": 23.399,
+     "eval_steps_per_second": 0.491
+ }
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "_from_model_config": true,
+     "bos_token_id": 0,
+     "eos_token_id": 0,
+     "transformers_version": "4.38.2",
+     "use_cache": false
+ }
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:fd0b9b8a5b077f6c401b661f1c6871d9a28b0c2dce3a5c81fffda007157eebe5
+ oid sha256:7985dff3ea63585b23c86c702954f01e9d87d382f21dd6aabd582482c423b6c8
  size 2829330208
runs/Mar16_17-07-37_42dbe5cf9ed4/events.out.tfevents.1710608905.42dbe5cf9ed4.122044.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1270cdc6a2b9e287092cc212ecb6f5dd566a68bb3c9b9fadbf008b8f94542ecc
- size 157832
+ oid sha256:b0e72b00ac5dbbb6e307a6ab3a7af648842bdb1091255ec55fb93acae8200d35
+ size 160250
runs/Mar16_17-07-37_42dbe5cf9ed4/events.out.tfevents.1710618290.42dbe5cf9ed4.122044.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6493742c34cbbd2b9dac6469da3c5973f97b305658467bee9931e32d29e0b3e1
+ size 828
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 1.0,
+     "train_loss": 0.6502101503246783,
+     "train_runtime": 8979.6364,
+     "train_samples": 61135,
+     "train_samples_per_second": 6.808,
+     "train_steps_per_second": 0.227
+ }
trainer_state.json ADDED
The diff for this file is too large to render.