Shakhovak committed
Commit d76de04
1 Parent(s): 39b4c59

End of training

Files changed (3)
  1. README.md +33 -13
  2. adapter_model.bin +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -15,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.0075
+ - Loss: 0.0076
 
  ## Model description
 
@@ -34,7 +34,7 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 0.0003
+ - learning_rate: 3e-05
  - train_batch_size: 8
  - eval_batch_size: 8
  - seed: 42
@@ -43,23 +43,43 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 2
- - training_steps: 400
+ - training_steps: 1200
  - mixed_precision_training: Native AMP
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:-----:|:----:|:---------------:|
- | 0.2512 | 0.12 | 40 | 0.0197 |
- | 0.0283 | 0.25 | 80 | 0.0189 |
- | 0.017 | 0.37 | 120 | 0.0144 |
- | 0.0146 | 0.5 | 160 | 0.0127 |
- | 0.0113 | 0.62 | 200 | 0.0111 |
- | 0.0108 | 0.74 | 240 | 0.0095 |
- | 0.0092 | 0.87 | 280 | 0.0089 |
- | 0.0085 | 0.99 | 320 | 0.0084 |
- | 0.0052 | 1.12 | 360 | 0.0079 |
- | 0.0033 | 1.24 | 400 | 0.0075 |
+ | 0.8188 | 0.13 | 40 | 0.1393 |
+ | 0.0712 | 0.25 | 80 | 0.0204 |
+ | 0.0191 | 0.38 | 120 | 0.0189 |
+ | 0.0157 | 0.5 | 160 | 0.0167 |
+ | 0.0148 | 0.63 | 200 | 0.0150 |
+ | 0.013 | 0.75 | 240 | 0.0134 |
+ | 0.0129 | 0.88 | 280 | 0.0133 |
+ | 0.0127 | 1.0 | 320 | 0.0120 |
+ | 0.0091 | 1.13 | 360 | 0.0114 |
+ | 0.0091 | 1.25 | 400 | 0.0110 |
+ | 0.0081 | 1.38 | 440 | 0.0109 |
+ | 0.0092 | 1.5 | 480 | 0.0106 |
+ | 0.0084 | 1.63 | 520 | 0.0101 |
+ | 0.0082 | 1.75 | 560 | 0.0099 |
+ | 0.0077 | 1.88 | 600 | 0.0098 |
+ | 0.007 | 2.0 | 640 | 0.0091 |
+ | 0.0049 | 2.13 | 680 | 0.0095 |
+ | 0.0045 | 2.26 | 720 | 0.0094 |
+ | 0.0054 | 2.38 | 760 | 0.0084 |
+ | 0.0044 | 2.51 | 800 | 0.0083 |
+ | 0.0046 | 2.63 | 840 | 0.0080 |
+ | 0.004 | 2.76 | 880 | 0.0075 |
+ | 0.0035 | 2.88 | 920 | 0.0075 |
+ | 0.004 | 3.01 | 960 | 0.0075 |
+ | 0.0022 | 3.13 | 1000 | 0.0077 |
+ | 0.0018 | 3.26 | 1040 | 0.0079 |
+ | 0.0019 | 3.38 | 1080 | 0.0079 |
+ | 0.0017 | 3.51 | 1120 | 0.0079 |
+ | 0.0017 | 3.63 | 1160 | 0.0076 |
+ | 0.0018 | 3.76 | 1200 | 0.0076 |
 
 
  ### Framework versions
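
The updated hyperparameters read like a standard Hugging Face `Trainer` run (the auto-generated card format suggests as much). Below is a minimal sketch of how they might map onto `transformers.TrainingArguments`; the output directory name, the 40-step evaluation/logging interval (inferred from the results table), and the use of `fp16` for Native AMP are assumptions, not values taken from this commit.

```python
from transformers import TrainingArguments

# Sketch only: reproduces the hyperparameters listed in the updated card.
training_args = TrainingArguments(
    output_dir="mistral-7b-instruct-finetune",  # placeholder; the real training script is not in this commit
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=1200,
    fp16=True,                   # "mixed_precision_training: Native AMP" (could equally be bf16)
    evaluation_strategy="steps",
    eval_steps=40,               # validation loss is reported every 40 steps in the results table
    logging_steps=40,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults, as listed in the card
)
```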
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c71287642e1905144a4bba8f7e1d0b1e49b4c714dca7635f613f014466cc0862
+ oid sha256:369293df0dff107d0a0f424857c8a032eefa82897eeaf2fc17b2c9ebb5255ca8
  size 218196746
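
`adapter_model.bin` at roughly 218 MB against a 7B base looks like a PEFT/LoRA adapter rather than full model weights; that is an inference from the filename and size, not something the card states. A hedged loading sketch, with `Shakhovak/<repo-name>` standing in for this repository's actual id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model the card says was fine-tuned.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Attach the fine-tuned adapter weights stored in adapter_model.bin.
# "Shakhovak/<repo-name>" is a placeholder; the repository id is not shown in this commit view.
model = PeftModel.from_pretrained(base, "Shakhovak/<repo-name>")
model.eval()
```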
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e4f4d19294ec7549481aa485a0e4eede6ef00924d7f93c41f9669a542b75064d
+ oid sha256:6283284b17c6fb88060af486e87eefd1e3b700ab7ce8c4f24d8fbed73ccaa150
  size 4984
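
`training_args.bin` is the serialized `TrainingArguments` object that the Hugging Face `Trainer` writes alongside its outputs. A small sketch for inspecting it locally, assuming `transformers` is installed (the pickle references its classes):

```python
import torch

# training_args.bin is a pickled TrainingArguments object, so weights_only=False
# is required on recent PyTorch versions; only load files from sources you trust.
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate)      # 3e-05 per the updated card
print(args.max_steps)          # 1200
print(args.lr_scheduler_type)  # linear
```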