jikaixuan committed on
Commit
881e79b
1 Parent(s): 83caecd

Model save

README.md CHANGED
@@ -1,14 +1,10 @@
 ---
-license: apache-2.0
 library_name: peft
 tags:
-- alignment-handbook
 - trl
 - dpo
 - generated_from_trainer
 base_model: mistralai/Mistral-7B-v0.1
-datasets:
-- HuggingFaceH4/ultrafeedback_binarized
 model-index:
 - name: zephyr-7b
   results: []
@@ -19,19 +15,19 @@ should probably proofread and complete it, then remove this comment. -->
 
 # zephyr-7b
 
-This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-qlora](https://huggingface.co/alignment-handbook/zephyr-7b-sft-qlora) on the HuggingFaceH4/ultrafeedback_binarized dataset.
+This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6789
-- Rewards/chosen: -0.5482
-- Rewards/rejected: -0.8623
-- Rewards/accuracies: 0.3591
-- Rewards/margins: 0.3141
-- Logps/rejected: -161.6313
-- Logps/chosen: -123.7209
-- Logits/rejected: 1.4916
-- Logits/chosen: 1.3712
-- Use Label: 17581.0469
-- Pred Label: 2490.9524
+- Loss: 0.6907
+- Rewards/chosen: -0.3413
+- Rewards/rejected: -0.5651
+- Rewards/accuracies: 0.3631
+- Rewards/margins: 0.2238
+- Logps/rejected: -131.9111
+- Logps/chosen: -103.0301
+- Logits/rejected: -0.1367
+- Logits/chosen: -0.2437
+- Use Label: 14866.4766
+- Pred Label: 3821.5239
 
 ## Model description
 
@@ -68,15 +64,15 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Use Label | Pred Label |
 |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:----------:|:----------:|
-| 0.6685 | 0.1 | 100 | 0.6684 | -0.0306 | -0.0936 | 0.3353 | 0.0631 | -84.7626 | -71.9572 | -2.0796 | -2.1097 | 1856.0 | 0.0 |
-| 0.676 | 0.21 | 200 | 0.6717 | -0.3729 | -0.4956 | 0.3214 | 0.1227 | -124.9563 | -106.1906 | -1.6889 | -1.7319 | 3898.4443 | 61.5556 |
-| 0.6728 | 0.31 | 300 | 0.6784 | -0.4712 | -0.7059 | 0.3373 | 0.2347 | -145.9853 | -116.0199 | -0.6762 | -0.7414 | 5793.0317 | 270.9683 |
-| 0.6715 | 0.42 | 400 | 0.6812 | -0.4462 | -0.7352 | 0.3552 | 0.2890 | -148.9146 | -113.5210 | 0.7648 | 0.6420 | 7595.3174 | 572.6826 |
-| 0.6744 | 0.52 | 500 | 0.6722 | -0.5121 | -0.7576 | 0.3413 | 0.2455 | -151.1573 | -120.1133 | 0.7128 | 0.6149 | 9378.1592 | 893.8412 |
-| 0.6784 | 0.63 | 600 | 0.6792 | -0.5107 | -0.8136 | 0.3512 | 0.3028 | -156.7531 | -119.9755 | 0.9939 | 0.8860 | 11169.8096 | 1206.1904 |
-| 0.6783 | 0.73 | 700 | 0.6756 | -0.6634 | -0.9598 | 0.3671 | 0.2964 | -171.3761 | -135.2395 | 1.2995 | 1.1927 | 12921.4766 | 1558.5238 |
-| 0.6776 | 0.84 | 800 | 0.6801 | -0.5500 | -0.8628 | 0.3532 | 0.3128 | -161.6791 | -123.9010 | 1.4789 | 1.3586 | 14683.5078 | 1900.4921 |
-| 0.6751 | 0.94 | 900 | 0.6790 | -0.5476 | -0.8618 | 0.3571 | 0.3143 | -161.5806 | -123.6563 | 1.4905 | 1.3693 | 16436.9844 | 2251.0159 |
+| 0.6818 | 0.1 | 100 | 0.6814 | -0.0056 | -0.0496 | 0.3393 | 0.0440 | -80.3582 | -69.4632 | -2.0664 | -2.0975 | 1833.4603 | 22.5397 |
+| 0.6818 | 0.21 | 200 | 0.6861 | -0.1358 | -0.2381 | 0.3373 | 0.1023 | -99.2068 | -82.4782 | -1.9938 | -2.0215 | 3701.2063 | 258.7936 |
+| 0.6848 | 0.31 | 300 | 0.6877 | -0.2068 | -0.3388 | 0.3413 | 0.1320 | -109.2766 | -89.5763 | -1.8828 | -1.9157 | 5437.8730 | 626.1270 |
+| 0.6857 | 0.42 | 400 | 0.6885 | -0.1802 | -0.3299 | 0.3532 | 0.1497 | -108.3913 | -86.9237 | -1.4031 | -1.4529 | 7112.4443 | 1055.5555 |
+| 0.6894 | 0.52 | 500 | 0.6892 | -0.2862 | -0.4559 | 0.3552 | 0.1697 | -120.9922 | -97.5203 | -0.5997 | -0.6889 | 8741.4287 | 1530.5714 |
+| 0.6881 | 0.63 | 600 | 0.6918 | -0.3826 | -0.6059 | 0.3532 | 0.2233 | -135.9845 | -107.1618 | -0.2548 | -0.3579 | 10293.6826 | 2082.3174 |
+| 0.6913 | 0.73 | 700 | 0.6899 | -0.3542 | -0.5787 | 0.3671 | 0.2244 | -133.2637 | -104.3247 | -0.2462 | -0.3470 | 11806.4766 | 2673.5239 |
+| 0.6893 | 0.84 | 800 | 0.6904 | -0.3443 | -0.5684 | 0.3631 | 0.2241 | -132.2416 | -103.3355 | -0.1293 | -0.2367 | 13331.9043 | 3252.0952 |
+| 0.689 | 0.94 | 900 | 0.6907 | -0.3413 | -0.5651 | 0.3631 | 0.2238 | -131.9111 | -103.0301 | -0.1367 | -0.2437 | 14866.4766 | 3821.5239 |
 
 
 ### Framework versions
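The Rewards/margins column in both versions of the table is simply Rewards/chosen minus Rewards/rejected. A quick sanity check against the final evaluation figures quoted on each side of the diff (a sketch; the 1e-3 tolerance allows for the card's 4-decimal rounding):

```python
# Verify Rewards/margins = Rewards/chosen - Rewards/rejected
# for the eval results quoted in the old and new model cards.
rows = [
    # (chosen, rejected, reported_margin)
    (-0.5482, -0.8623, 0.3141),  # old card (ultrafeedback_binarized run)
    (-0.3413, -0.5651, 0.2238),  # new card (this commit)
]

for chosen, rejected, margin in rows:
    computed = chosen - rejected
    # Card values are rounded to 4 decimals, so compare with a tolerance.
    assert abs(computed - margin) < 1e-3, (computed, margin)

print("margins consistent")
```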
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4616e865a950729019864b5c5aaca2adcf96c1a9ddea4fcdbddbb1fcbc6eb887
+oid sha256:a372ac4c2a71cd6d3d51983272071541becc25a55e74ef4b7ce122dbc6f2b513
 size 671150064
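What the adapter_model.safetensors diff shows is a Git LFS pointer file, not the weights themselves: a spec version, the sha256 of the real blob, and its size in bytes. A minimal sketch of parsing such a pointer, using the text from the new side of the diff:

```python
# Parse a Git LFS pointer file into its key/value fields.
# Pointer text copied from the new side of the diff above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:a372ac4c2a71cd6d3d51983272071541becc25a55e74ef4b7ce122dbc6f2b513
size 671150064
"""

# Each line is "<key> <value>"; split on the first space only.
fields = dict(line.split(" ", 1) for line in pointer.splitlines())

algo, digest = fields["oid"].split(":", 1)
assert algo == "sha256" and len(digest) == 64  # hex sha256 is 64 chars

size_mib = int(fields["size"]) / (1024 * 1024)
print(f"{algo} {digest[:12]} {size_mib:.1f} MiB")
```

Note that only the oid changed in this commit; the byte size of the adapter is identical on both sides.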
all_results.json CHANGED
@@ -1,23 +1,8 @@
 {
     "epoch": 1.0,
-    "eval_logits/chosen": 1.3711830377578735,
-    "eval_logits/rejected": 1.4916048049926758,
-    "eval_logps/chosen": -123.7209243774414,
-    "eval_logps/rejected": -161.63131713867188,
-    "eval_loss": 0.6788680553436279,
-    "eval_pred_label": 2490.952392578125,
-    "eval_rewards/accuracies": 0.3591269850730896,
-    "eval_rewards/chosen": -0.548203706741333,
-    "eval_rewards/margins": 0.3141288757324219,
-    "eval_rewards/rejected": -0.8623325824737549,
-    "eval_runtime": 247.4536,
-    "eval_samples": 2000,
-    "eval_samples_per_second": 8.082,
-    "eval_steps_per_second": 0.255,
-    "eval_use_label": 17581.046875,
-    "train_loss": 0.6760230718482851,
-    "train_runtime": 20063.9235,
+    "train_loss": 0.6880922077838039,
+    "train_runtime": 20023.3666,
     "train_samples": 61135,
-    "train_samples_per_second": 3.047,
+    "train_samples_per_second": 3.053,
     "train_steps_per_second": 0.048
 }
train_results.json CHANGED
@@ -1,8 +1,8 @@
 {
     "epoch": 1.0,
-    "train_loss": 0.6760230718482851,
-    "train_runtime": 20063.9235,
+    "train_loss": 0.6880922077838039,
+    "train_runtime": 20023.3666,
     "train_samples": 61135,
-    "train_samples_per_second": 3.047,
+    "train_samples_per_second": 3.053,
     "train_steps_per_second": 0.048
 }
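The throughput figures in train_results.json are derivable from the other fields: train_samples_per_second is train_samples divided by train_runtime. A quick check of both sides of the diff (a sketch; the loose tolerance covers the 3-decimal rounding of the reported rates):

```python
# Check train_samples_per_second ~= train_samples / train_runtime
# for the old and new train_results.json values.
runs = [
    # (train_runtime_seconds, reported_samples_per_second)
    (20063.9235, 3.047),  # old run
    (20023.3666, 3.053),  # new run (this commit)
]
train_samples = 61135

for runtime, reported in runs:
    computed = train_samples / runtime
    # Reported rates are rounded to 3 decimals, so compare loosely.
    assert abs(computed - reported) < 5e-4, (computed, reported)

print("throughput figures consistent")
```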
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff