lewtun (HF staff) committed
Commit b865206
1 Parent(s): 82c11d4

Model save

README.md CHANGED
@@ -1,30 +1,32 @@
 ---
 license: apache-2.0
- base_model: mistralai/Mistral-7B-v0.1
 tags:
 - generated_from_trainer
- - alignment-handbook
 model-index:
- - name: zephyr-7b-dpo-lora
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
- # zephyr-7b-dpo-lora
 
- This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.5270
- - Rewards/chosen: -0.1210
- - Rewards/rejected: -0.9978
- - Rewards/accuracies: 0.7812
- - Rewards/margins: 0.8768
- - Logps/rejected: -198.5849
- - Logps/chosen: -248.6519
- - Logits/rejected: -1.9190
- - Logits/chosen: -2.0860
 
 ## Model description
 
@@ -43,31 +45,48 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
- - learning_rate: 5e-07
- - train_batch_size: 2
- - eval_batch_size: 4
 - seed: 42
 - distributed_type: multi-GPU
- - num_devices: 32
- - total_train_batch_size: 64
- - total_eval_batch_size: 128
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 3
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
 |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
- | 0.5491 | 1.0 | 969 | 0.5563 | -0.0962 | -0.7226 | 0.7812 | 0.6263 | -195.8333 | -248.4046 | -1.9755 | -2.1375 |
- | 0.5454 | 2.0 | 1938 | 0.5312 | -0.1249 | -0.9600 | 0.7969 | 0.8351 | -198.2077 | -248.6910 | -1.9316 | -2.0971 |
- | 0.5242 | 3.0 | 2907 | 0.5270 | -0.1210 | -0.9978 | 0.7812 | 0.8768 | -198.5849 | -248.6519 | -1.9190 | -2.0860 |
 
 ### Framework versions
 
- - Transformers 4.35.0
- - Pytorch 2.1.0+cu118
 - Datasets 2.14.6
- - Tokenizers 0.14.1
 
 ---
 license: apache-2.0
+ library_name: peft
 tags:
+ - trl
+ - dpo
 - generated_from_trainer
+ base_model: mistralai/Mistral-7B-v0.1
 model-index:
+ - name: zephyr-7b-dpo-qlora
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
+ # zephyr-7b-dpo-qlora
 
+ This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset (the auto-generated card printed "None" because the Trainer did not record the dataset name; the previous card identifies it).
 It achieves the following results on the evaluation set:
+ - Loss: 0.5325
+ - Rewards/chosen: -1.2325
+ - Rewards/rejected: -2.0565
+ - Rewards/accuracies: 0.7656
+ - Rewards/margins: 0.8240
+ - Logps/rejected: -457.4398
+ - Logps/chosen: -373.4022
+ - Logits/rejected: 0.7596
+ - Logits/chosen: 0.5001
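
Since the generated card stops at the metrics, here is a minimal usage sketch (not produced by the Trainer) for loading the QLoRA adapter with PEFT. The repo id is assumed from the model name, and the snippet assumes the adapter repo also ships the tokenizer files:

```python
# Hedged sketch: load the DPO QLoRA adapter on top of the Mistral-7B base.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "alignment-handbook/zephyr-7b-dpo-qlora"  # assumed repo id

# AutoPeftModelForCausalLM reads adapter_config.json, fetches the base model
# it points to, and attaches the LoRA weights on top.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

inputs = tokenizer("The key idea of DPO is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```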
 
 ## Model description
 
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
+ - learning_rate: 5e-06
+ - train_batch_size: 4
+ - eval_batch_size: 8
 - seed: 42
 - distributed_type: multi-GPU
+ - num_devices: 8
+ - total_train_batch_size: 32
+ - total_eval_batch_size: 64
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
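
For reference, a hedged sketch of how the values above map onto `transformers.TrainingArguments`; the `output_dir` and `bf16` flags are illustrative assumptions not recorded in the card, while the Adam betas and epsilon listed above are already the library defaults:

```python
# Hedged sketch only: express the card's hyperparameters as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="zephyr-7b-dpo-qlora",  # assumed
    learning_rate=5e-6,
    per_device_train_batch_size=4,     # "train_batch_size" above
    per_device_eval_batch_size=8,      # "eval_batch_size" above
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,                         # assumed precision, not recorded in the card
)

# With 8 devices: total_train_batch_size = 4 * 8 = 32 and
# total_eval_batch_size = 8 * 8 = 64, matching the card.
```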
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
 |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
+ | 0.6916 | 0.05 | 100 | 0.6912 | 0.0059 | 0.0019 | 0.6484 | 0.0041 | -251.6075 | -249.5596 | -2.2040 | -2.2621 |
+ | 0.655 | 0.1 | 200 | 0.6498 | -0.0559 | -0.1762 | 0.7070 | 0.1203 | -269.4106 | -255.7421 | -2.1011 | -2.1614 |
+ | 0.6342 | 0.16 | 300 | 0.6146 | -0.3407 | -0.6269 | 0.7031 | 0.2862 | -314.4839 | -284.2224 | -1.9037 | -1.9793 |
+ | 0.6121 | 0.21 | 400 | 0.5946 | -0.4657 | -0.8916 | 0.7031 | 0.4259 | -340.9551 | -296.7203 | -1.8717 | -1.9543 |
+ | 0.5973 | 0.26 | 500 | 0.5938 | -0.3681 | -0.7766 | 0.7305 | 0.4085 | -329.4522 | -286.9666 | -1.8440 | -1.9282 |
+ | 0.5473 | 0.31 | 600 | 0.5774 | -0.6893 | -1.2264 | 0.7344 | 0.5371 | -374.4341 | -319.0812 | -1.6815 | -1.7726 |
+ | 0.5792 | 0.37 | 700 | 0.5709 | -0.6635 | -1.2100 | 0.7578 | 0.5465 | -372.7989 | -316.5072 | -1.4783 | -1.5775 |
+ | 0.5194 | 0.42 | 800 | 0.5590 | -1.0208 | -1.6453 | 0.7461 | 0.6245 | -416.3269 | -352.2357 | -0.3791 | -0.5486 |
+ | 0.5367 | 0.47 | 900 | 0.5492 | -1.1477 | -1.8521 | 0.7266 | 0.7044 | -437.0040 | -364.9276 | -0.0908 | -0.2899 |
+ | 0.5575 | 0.52 | 1000 | 0.5450 | -1.1704 | -1.9048 | 0.7344 | 0.7344 | -442.2755 | -367.1964 | 0.2761 | 0.0498 |
+ | 0.5507 | 0.58 | 1100 | 0.5429 | -1.1040 | -1.8671 | 0.7422 | 0.7631 | -438.5026 | -360.5551 | 0.5339 | 0.2877 |
+ | 0.5305 | 0.63 | 1200 | 0.5366 | -1.1557 | -1.9243 | 0.7578 | 0.7686 | -444.2217 | -365.7241 | 0.7350 | 0.4755 |
+ | 0.5171 | 0.68 | 1300 | 0.5304 | -1.3741 | -2.1678 | 0.7656 | 0.7937 | -468.5735 | -387.5681 | 0.7686 | 0.5029 |
+ | 0.4875 | 0.73 | 1400 | 0.5321 | -1.3228 | -2.1513 | 0.7578 | 0.8285 | -466.9267 | -382.4329 | 0.8566 | 0.5926 |
+ | 0.5216 | 0.78 | 1500 | 0.5326 | -1.2006 | -2.0034 | 0.7617 | 0.8028 | -452.1298 | -370.2103 | 0.7189 | 0.4630 |
+ | 0.4894 | 0.84 | 1600 | 0.5327 | -1.2300 | -2.0556 | 0.7656 | 0.8256 | -457.3565 | -373.1585 | 0.7405 | 0.4828 |
+ | 0.5179 | 0.89 | 1700 | 0.5326 | -1.2313 | -2.0558 | 0.7656 | 0.8245 | -457.3720 | -373.2860 | 0.7604 | 0.5012 |
+ | 0.5534 | 0.94 | 1800 | 0.5325 | -1.2309 | -2.0558 | 0.7656 | 0.8249 | -457.3779 | -373.2437 | 0.7550 | 0.4957 |
+ | 0.5539 | 0.99 | 1900 | 0.5325 | -1.2325 | -2.0565 | 0.7656 | 0.8240 | -457.4398 | -373.4022 | 0.7596 | 0.5001 |
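
A note on the reward columns (not part of the generated card): in TRL's DPO implementation, Rewards/chosen and Rewards/rejected are the implicit rewards beta * (log-prob under the policy minus log-prob under the reference model), averaged over the eval batch; beta itself is not recorded here (TRL's default is 0.1). Rewards/margins is simply their difference, which the final row lets us check:

```python
# Sanity check on the last eval row: margin = chosen reward - rejected reward.
rewards_chosen, rewards_rejected = -1.2325, -2.0565
margin = rewards_chosen - rewards_rejected
assert round(margin, 4) == 0.8240  # matches the logged Rewards/margins
```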
 
 ### Framework versions
 
+ - PEFT 0.7.1
+ - Transformers 4.36.2
+ - Pytorch 2.1.2+cu121
 - Datasets 2.14.6
+ - Tokenizers 0.15.0
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:a61e6c64f98d1de332121cd4934fc387468e1434815d637ddcad2b444c849f7e
 size 83945744
 
 version https://git-lfs.github.com/spec/v1
+ oid sha256:881e1b5a4dd0347641273b3dcdd5ce52a7e613d1712bb56b80cc13e114765f7c
 size 83945744
all_results.json CHANGED
@@ -1,21 +1,21 @@
 {
- "epoch": 3.0,
- "eval_logits/chosen": -2.085988998413086,
- "eval_logits/rejected": -1.9190013408660889,
- "eval_logps/chosen": -248.65191650390625,
- "eval_logps/rejected": -198.58494567871094,
- "eval_loss": 0.5269633531570435,
- "eval_rewards/accuracies": 0.78125,
- "eval_rewards/chosen": -0.12098389863967896,
- "eval_rewards/margins": 0.8767741918563843,
- "eval_rewards/rejected": -0.9977580308914185,
- "eval_runtime": 49.9631,
 "eval_samples": 2000,
- "eval_samples_per_second": 40.03,
- "eval_steps_per_second": 0.32,
- "train_loss": 0.5643668570057567,
- "train_runtime": 8096.9375,
- "train_samples": 61966,
- "train_samples_per_second": 22.959,
- "train_steps_per_second": 0.359
 }
 
 {
+ "epoch": 1.0,
+ "eval_logits/chosen": 0.5000983476638794,
+ "eval_logits/rejected": 0.7595670819282532,
+ "eval_logps/chosen": -373.40216064453125,
+ "eval_logps/rejected": -457.4398498535156,
+ "eval_loss": 0.5325239300727844,
+ "eval_rewards/accuracies": 0.765625,
+ "eval_rewards/chosen": -1.2324851751327515,
+ "eval_rewards/margins": 0.8239741921424866,
+ "eval_rewards/rejected": -2.056459426879883,
+ "eval_runtime": 99.4029,
 "eval_samples": 2000,
+ "eval_samples_per_second": 20.12,
+ "eval_steps_per_second": 0.322,
+ "train_loss": 0.5648497628454511,
+ "train_runtime": 7610.489,
+ "train_samples": 61135,
+ "train_samples_per_second": 8.033,
+ "train_steps_per_second": 0.251
 }
eval_results.json CHANGED
@@ -1,16 +1,16 @@
 {
- "epoch": 3.0,
- "eval_logits/chosen": -2.085988998413086,
- "eval_logits/rejected": -1.9190013408660889,
- "eval_logps/chosen": -248.65191650390625,
- "eval_logps/rejected": -198.58494567871094,
- "eval_loss": 0.5269633531570435,
- "eval_rewards/accuracies": 0.78125,
- "eval_rewards/chosen": -0.12098389863967896,
- "eval_rewards/margins": 0.8767741918563843,
- "eval_rewards/rejected": -0.9977580308914185,
- "eval_runtime": 49.9631,
 "eval_samples": 2000,
- "eval_samples_per_second": 40.03,
- "eval_steps_per_second": 0.32
 }
 
 {
+ "epoch": 1.0,
+ "eval_logits/chosen": 0.5000983476638794,
+ "eval_logits/rejected": 0.7595670819282532,
+ "eval_logps/chosen": -373.40216064453125,
+ "eval_logps/rejected": -457.4398498535156,
+ "eval_loss": 0.5325239300727844,
+ "eval_rewards/accuracies": 0.765625,
+ "eval_rewards/chosen": -1.2324851751327515,
+ "eval_rewards/margins": 0.8239741921424866,
+ "eval_rewards/rejected": -2.056459426879883,
+ "eval_runtime": 99.4029,
 "eval_samples": 2000,
+ "eval_samples_per_second": 20.12,
+ "eval_steps_per_second": 0.322
 }
runs/Jan09_05-07-46_ip-26-0-175-170/events.out.tfevents.1704777003.ip-26-0-175-170.1799139.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:47e9959cc5b37bb0410582cd3391cf75152b1aaa9acdb145e33e75bf5d54c137
- size 139556
 
 version https://git-lfs.github.com/spec/v1
+ oid sha256:07ae93d49c31ce3dd08cfe99bd93ac32a5d111827d3113468a1ef52d46c8e90c
+ size 140544
runs/Jan09_05-07-46_ip-26-0-175-170/events.out.tfevents.1704784713.ip-26-0-175-170.1799139.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea4e75f57b9d81409e2cae26268f15bb6b03b1bfe6164baadf83b6b0a10d5156
+ size 828
train_results.json CHANGED
@@ -1,8 +1,8 @@
 {
- "epoch": 3.0,
- "train_loss": 0.5643668570057567,
- "train_runtime": 8096.9375,
- "train_samples": 61966,
- "train_samples_per_second": 22.959,
- "train_steps_per_second": 0.359
 }
 
 {
+ "epoch": 1.0,
+ "train_loss": 0.5648497628454511,
+ "train_runtime": 7610.489,
+ "train_samples": 61135,
+ "train_samples_per_second": 8.033,
+ "train_steps_per_second": 0.251
 }
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff