# phi-2-finetuned-labeledsbc

This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4040

## Model description

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
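The listed `total_train_batch_size` is not an independent setting: it follows from accumulating gradients over several micro-batches before each optimizer update. A minimal sketch of that arithmetic, using only the values from the card (the 144-example figure is an inference from the table's 6 optimizer steps per epoch, not stated in the card):

```python
# Hyperparameters as listed in the card above.
train_batch_size = 6              # examples per micro-batch (per forward pass)
gradient_accumulation_steps = 4   # micro-batches accumulated per optimizer step

# Effective batch size seen by each optimizer update.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 24

# The results table shows 6 optimizer steps per epoch, which implies
# roughly 6 * 24 = 144 training examples per epoch.
examples_per_epoch = 6 * total_train_batch_size
print(examples_per_epoch)  # 144
```
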
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4053        | 1.0   | 6    | 3.1734          |
| 2.7973        | 2.0   | 12   | 2.3825          |
| 2.0487        | 3.0   | 18   | 1.8269          |
| 1.6066        | 4.0   | 24   | 1.5462          |
| 1.3941        | 5.0   | 30   | 1.3882          |
| 1.2595        | 6.0   | 36   | 1.2367          |
| 1.1104        | 7.0   | 42   | 1.1045          |
| 0.988         | 8.0   | 48   | 1.0010          |
| 0.8834        | 9.0   | 54   | 0.9122          |
| 0.7908        | 10.0  | 60   | 0.8344          |
| 0.7069        | 11.0  | 66   | 0.7794          |
| 0.6332        | 12.0  | 72   | 0.7276          |
| 0.5627        | 13.0  | 78   | 0.6681          |
| 0.5072        | 14.0  | 84   | 0.6246          |
| 0.4579        | 15.0  | 90   | 0.5930          |
| 0.4278        | 16.0  | 96   | 0.5549          |
| 0.3945        | 17.0  | 102  | 0.5336          |
| 0.3596        | 18.0  | 108  | 0.5108          |
| 0.3248        | 19.0  | 114  | 0.4835          |
| 0.3066        | 20.0  | 120  | 0.4727          |
| 0.2909        | 21.0  | 126  | 0.4453          |
| 0.2661        | 22.0  | 132  | 0.4450          |
| 0.2521        | 23.0  | 138  | 0.4278          |
| 0.2474        | 24.0  | 144  | 0.4186          |
| 0.2439        | 25.0  | 150  | 0.4247          |
| 0.2377        | 26.0  | 156  | 0.4125          |
| 0.2271        | 27.0  | 162  | 0.4085          |
| 0.2144        | 28.0  | 168  | 0.4112          |
| 0.2085        | 29.0  | 174  | 0.4065          |
| 0.2118        | 30.0  | 180  | 0.4111          |
| 0.2088        | 31.0  | 186  | 0.4080          |
| 0.1983        | 32.0  | 192  | 0.4068          |
| 0.1966        | 33.0  | 198  | 0.4018          |
| 0.1921        | 34.0  | 204  | 0.4007          |
| 0.1928        | 35.0  | 210  | 0.3933          |
| 0.1893        | 36.0  | 216  | 0.3919          |
| 0.185         | 37.0  | 222  | 0.3996          |
| 0.1762        | 38.0  | 228  | 0.4016          |
| 0.1812        | 39.0  | 234  | 0.4052          |
| 0.1785        | 40.0  | 240  | 0.4008          |
| 0.173         | 41.0  | 246  | 0.4009          |
| 0.1748        | 42.0  | 252  | 0.4010          |
| 0.1745        | 43.0  | 258  | 0.4023          |
| 0.1765        | 44.0  | 264  | 0.4003          |
| 0.1748        | 45.0  | 270  | 0.4014          |
| 0.1809        | 46.0  | 276  | 0.4020          |
| 0.1698        | 47.0  | 282  | 0.4028          |
| 0.1691        | 48.0  | 288  | 0.4037          |
| 0.174         | 49.0  | 294  | 0.4040          |
| 0.1656        | 50.0  | 300  | 0.4040          |
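One detail worth noticing in the table: validation loss bottoms out at 0.3919 around epoch 36 and drifts slightly upward afterwards, so the final epoch-50 checkpoint is not the best one by validation loss. A quick check against a few values transcribed from the table (epochs chosen here for illustration):

```python
# (epoch -> validation loss) pairs transcribed from the table above.
val_losses = {
    35: 0.3933,
    36: 0.3919,
    40: 0.4008,
    49: 0.4040,
    50: 0.4040,
}

# Pick the epoch with the lowest validation loss.
best_epoch = min(val_losses, key=val_losses.get)
print(best_epoch, val_losses[best_epoch])  # 36 0.3919
```
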
### Framework versions