chchen committed on
Commit
e573166
1 Parent(s): 2c27dfb

Model save

Files changed (2)
  1. README.md +77 -0
  2. trainer_log.jsonl +19 -0
README.md ADDED
@@ -0,0 +1,77 @@
---
license: llama2
library_name: peft
tags:
- trl
- dpo
- llama-factory
- generated_from_trainer
base_model: lmsys/vicuna-7b-v1.5
model-index:
- name: Vicuna-7B-v1.5-ORPO
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Vicuna-7B-v1.5-ORPO

This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0073
- Rewards/chosen: -0.0940
- Rewards/rejected: -0.1081
- Rewards/accuracies: 0.5160
- Rewards/margins: 0.0141
- Logps/rejected: -1.0807
- Logps/chosen: -0.9399
- Logits/rejected: -0.2988
- Logits/chosen: -0.3321
- Sft Loss: 0.9399
- Odds Ratio Loss: 0.6739
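
ORPO's training objective is the SFT negative log-likelihood on the chosen response plus a weighted odds-ratio term, which is why the card reports both components separately. The weighting is not recorded here, but the numbers above are consistent with a weight of 0.1 (an inference from the metrics, not a value read from the training config):

$$
\mathcal{L}_{\text{ORPO}} = \mathcal{L}_{\text{SFT}} + \lambda\,\mathcal{L}_{\text{OR}},
\qquad 0.9399 + 0.1 \times 0.6739 \approx 1.0073 .
$$

The reward metrics follow the same scaling: Rewards/chosen ≈ 0.1 × Logps/chosen (0.1 × (-0.9399) ≈ -0.0940), and likewise for the rejected side (0.1 × (-1.0807) ≈ -0.1081).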

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 0.1
- num_epochs: 3.0
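
This card was produced by LLaMA-Factory, and the exact launch command is not part of the commit. Purely for orientation, a minimal TRL-based sketch that mirrors the hyperparameters above might look as follows; the dataset name, the LoRA settings, and the `beta` weight of 0.1 are assumptions (see the loss breakdown above), not values taken from this run:

```python
# Approximate reconstruction with TRL's ORPOTrainer -- NOT the exact
# LLaMA-Factory command used for this checkpoint.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "lmsys/vicuna-7b-v1.5"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder dataset: the card only says "an unknown dataset".
# ORPOTrainer expects "prompt"/"chosen"/"rejected" columns.
dataset = load_dataset("your/preference-dataset", split="train")

args = ORPOConfig(
    output_dir="Vicuna-7B-v1.5-ORPO",
    beta=0.1,                        # odds-ratio weight; inferred, not stated in the card
    learning_rate=5e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,   # 2 x 8 = total_train_batch_size 16
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,                # the card lists "warmup_steps: 0.1", which reads like a ratio
    seed=42,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,             # `processing_class=` in newer TRL releases
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # adapter settings are not recorded in the card
)
trainer.train()
```
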

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Sft Loss | Odds Ratio Loss |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:---------------:|
| 1.0913 | 0.8891 | 500 | 1.0354 | -0.0968 | -0.1107 | 0.5180 | 0.0140 | -1.1075 | -0.9676 | -0.3176 | -0.3490 | 0.9676 | 0.6776 |
| 1.0328 | 1.7782 | 1000 | 1.0126 | -0.0945 | -0.1086 | 0.5160 | 0.0141 | -1.0856 | -0.9451 | -0.2979 | -0.3308 | 0.9451 | 0.6748 |
| 0.9998 | 2.6673 | 1500 | 1.0073 | -0.0940 | -0.1081 | 0.5160 | 0.0141 | -1.0807 | -0.9399 | -0.2988 | -0.3321 | 0.9399 | 0.6739 |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
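
A minimal inference sketch for the resulting PEFT adapter, assuming it is published as `chchen/Vicuna-7B-v1.5-ORPO` (the repo id is inferred from this card's name and may differ):

```python
# Load the LoRA/PEFT adapter described in this card on top of the base model.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "chchen/Vicuna-7B-v1.5-ORPO"  # assumed repo id

# AutoPeftModelForCausalLM reads the adapter config, pulls the base model
# (lmsys/vicuna-7b-v1.5), and attaches the adapter weights on top of it.
# device_map="auto" requires the accelerate package.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")

# Vicuna v1.5 uses a USER/ASSISTANT conversation format.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "USER: What is ORPO in one sentence? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
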
trainer_log.jsonl CHANGED
@@ -151,3 +151,22 @@
  {"current_steps": 1490, "total_steps": 1686, "loss": 1.0255, "accuracy": 0.48750001192092896, "learning_rate": 1.6490167940538343e-07, "epoch": 2.6494776617026004, "percentage": 88.37, "elapsed_time": "4:07:41", "remaining_time": "0:32:34"}
  {"current_steps": 1500, "total_steps": 1686, "loss": 0.9998, "accuracy": 0.5249999761581421, "learning_rate": 1.4866882516191339e-07, "epoch": 2.6672593909757722, "percentage": 88.97, "elapsed_time": "4:09:20", "remaining_time": "0:30:55"}
  {"current_steps": 1500, "total_steps": 1686, "eval_loss": 1.0073015689849854, "epoch": 2.6672593909757722, "percentage": 88.97, "elapsed_time": "4:12:26", "remaining_time": "0:31:18"}
+ {"current_steps": 1510, "total_steps": 1686, "loss": 0.9915, "accuracy": 0.5625, "learning_rate": 1.3325243551706057e-07, "epoch": 2.685041120248944, "percentage": 89.56, "elapsed_time": "4:14:03", "remaining_time": "0:29:36"}
+ {"current_steps": 1520, "total_steps": 1686, "loss": 0.9727, "accuracy": 0.550000011920929, "learning_rate": 1.1865786358165737e-07, "epoch": 2.702822849522116, "percentage": 90.15, "elapsed_time": "4:15:39", "remaining_time": "0:27:55"}
+ {"current_steps": 1530, "total_steps": 1686, "loss": 1.1147, "accuracy": 0.512499988079071, "learning_rate": 1.0489017710262311e-07, "epoch": 2.720604578795288, "percentage": 90.75, "elapsed_time": "4:17:16", "remaining_time": "0:26:13"}
+ {"current_steps": 1540, "total_steps": 1686, "loss": 1.0195, "accuracy": 0.46875, "learning_rate": 9.195415670326446e-08, "epoch": 2.73838630806846, "percentage": 91.34, "elapsed_time": "4:18:55", "remaining_time": "0:24:32"}
+ {"current_steps": 1550, "total_steps": 1686, "loss": 1.0188, "accuracy": 0.4124999940395355, "learning_rate": 7.985429422327384e-08, "epoch": 2.7561680373416317, "percentage": 91.93, "elapsed_time": "4:20:28", "remaining_time": "0:22:51"}
+ {"current_steps": 1560, "total_steps": 1686, "loss": 0.9834, "accuracy": 0.4749999940395355, "learning_rate": 6.859479115900818e-08, "epoch": 2.773949766614803, "percentage": 92.53, "elapsed_time": "4:22:05", "remaining_time": "0:21:10"}
+ {"current_steps": 1570, "total_steps": 1686, "loss": 1.0133, "accuracy": 0.5, "learning_rate": 5.817955720457902e-08, "epoch": 2.791731495887975, "percentage": 93.12, "elapsed_time": "4:23:41", "remaining_time": "0:19:28"}
+ {"current_steps": 1580, "total_steps": 1686, "loss": 1.012, "accuracy": 0.512499988079071, "learning_rate": 4.861220889427199e-08, "epoch": 2.809513225161147, "percentage": 93.71, "elapsed_time": "4:25:14", "remaining_time": "0:17:47"}
+ {"current_steps": 1590, "total_steps": 1686, "loss": 1.0172, "accuracy": 0.5375000238418579, "learning_rate": 3.9896068346758074e-08, "epoch": 2.827294954434319, "percentage": 94.31, "elapsed_time": "4:26:49", "remaining_time": "0:16:06"}
+ {"current_steps": 1600, "total_steps": 1686, "loss": 1.0071, "accuracy": 0.4312500059604645, "learning_rate": 3.203416211153832e-08, "epoch": 2.8450766837074903, "percentage": 94.9, "elapsed_time": "4:28:28", "remaining_time": "0:14:25"}
+ {"current_steps": 1610, "total_steps": 1686, "loss": 1.0176, "accuracy": 0.4124999940395355, "learning_rate": 2.5029220118019393e-08, "epoch": 2.8628584129806622, "percentage": 95.49, "elapsed_time": "4:30:06", "remaining_time": "0:12:45"}
+ {"current_steps": 1620, "total_steps": 1686, "loss": 0.9328, "accuracy": 0.625, "learning_rate": 1.8883674727586122e-08, "epoch": 2.880640142253834, "percentage": 96.09, "elapsed_time": "4:31:41", "remaining_time": "0:11:04"}
+ {"current_steps": 1630, "total_steps": 1686, "loss": 0.9816, "accuracy": 0.4124999940395355, "learning_rate": 1.3599659889000639e-08, "epoch": 2.898421871527006, "percentage": 96.68, "elapsed_time": "4:33:20", "remaining_time": "0:09:23"}
+ {"current_steps": 1640, "total_steps": 1686, "loss": 1.1156, "accuracy": 0.44999998807907104, "learning_rate": 9.179010397421528e-09, "epoch": 2.916203600800178, "percentage": 97.27, "elapsed_time": "4:35:02", "remaining_time": "0:07:42"}
+ {"current_steps": 1650, "total_steps": 1686, "loss": 0.9291, "accuracy": 0.4937500059604645, "learning_rate": 5.623261257296509e-09, "epoch": 2.93398533007335, "percentage": 97.86, "elapsed_time": "4:36:38", "remaining_time": "0:06:02"}
+ {"current_steps": 1660, "total_steps": 1686, "loss": 0.9945, "accuracy": 0.5249999761581421, "learning_rate": 2.933647149357122e-09, "epoch": 2.9517670593465217, "percentage": 98.46, "elapsed_time": "4:38:15", "remaining_time": "0:04:21"}
+ {"current_steps": 1670, "total_steps": 1686, "loss": 1.0009, "accuracy": 0.4625000059604645, "learning_rate": 1.1111020018930717e-09, "epoch": 2.969548788619693, "percentage": 99.05, "elapsed_time": "4:39:52", "remaining_time": "0:02:40"}
+ {"current_steps": 1680, "total_steps": 1686, "loss": 0.9695, "accuracy": 0.5874999761581421, "learning_rate": 1.5625866646051813e-10, "epoch": 2.987330517892865, "percentage": 99.64, "elapsed_time": "4:41:29", "remaining_time": "0:01:00"}
+ {"current_steps": 1686, "total_steps": 1686, "epoch": 2.997999555456768, "percentage": 100.0, "elapsed_time": "4:42:30", "remaining_time": "0:00:00"}
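
`trainer_log.jsonl` holds one JSON object per line: periodic training records with `loss`, `accuracy`, and `learning_rate`, and evaluation records with `eval_loss` instead. A small sketch for reading it back (the path is wherever you saved the log):

```python
# Read the training log: one JSON record per line, either a training step
# (has "loss") or an evaluation step (has "eval_loss").
import json

train_steps, eval_steps = [], []
with open("trainer_log.jsonl") as f:
    for line in f:
        record = json.loads(line)
        if "eval_loss" in record:
            eval_steps.append(record)
        elif "loss" in record:
            train_steps.append(record)

last = train_steps[-1]
print(f"step {last['current_steps']}/{last['total_steps']}: loss={last['loss']}")
for record in eval_steps:
    print(f"eval @ step {record['current_steps']}: eval_loss={record['eval_loss']}")
```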