Steelskull committed
Commit 9467ed2 (parent 56d91e6): Update README.md

README.md:
# phi-2-DLEC

The DLEC (Distributive Layer Expansion Curve) methodology offers a novel approach to improving neural network models by focusing on the strategic duplication of certain effective layers. Developed with the aim of enhancing model performance, DLEC carefully identifies and amplifies the impact of key layers within the model's architecture.

## Code overview:

Setting Up:

First, the script ensures all necessary components are in place, from libraries to the model and dataset.
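
The selection script itself is not included in this README, so the snippet below is only a rough sketch of what this setup step could look like. It assumes the base model abacaj/phi-2-super (linked later in this README) and uses WikiText-2 as a stand-in calibration dataset; both choices beyond the base model are assumptions.

```python
# Rough sketch of the setup step: load the model, tokenizer, and a small text dataset.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "abacaj/phi-2-super"   # base model referenced later in this README
DATASET = "wikitext"                # assumed calibration corpus; any small text set would do

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,
    device_map="auto",              # assumes `accelerate` is installed
)
model.eval()

dataset = load_dataset(DATASET, "wikitext-2-raw-v1", split="train[:1%]")
```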

Database for Activations:

A SQLite database is established to track layer activations, providing a clear view into how individual neurons react and which layers are most influential: these are our 'beneficial layers.'
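
The database schema is not described here, so the following is only a sketch of one way this could be wired up: a hypothetical layer_activations table, filled by PyTorch forward hooks that log the mean absolute activation of each decoder layer over a small calibration pass. It reuses model, tokenizer, and dataset from the setup sketch above.

```python
# Sketch: log a per-layer activation statistic into SQLite via forward hooks.
# Reuses `model`, `tokenizer`, and `dataset` from the setup sketch above.
import sqlite3

import torch

db = sqlite3.connect("activations.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS layer_activations ("
    "layer_idx INTEGER, mean_abs_activation REAL)"
)

def make_hook(layer_idx):
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        db.execute(
            "INSERT INTO layer_activations VALUES (?, ?)",
            (layer_idx, hidden.detach().abs().mean().item()),
        )
    return hook

# In recent transformers versions, Phi-2's decoder layers sit under model.model.layers.
handles = [
    layer.register_forward_hook(make_hook(i))
    for i, layer in enumerate(model.model.layers)
]

with torch.no_grad():
    for sample in dataset.select(range(256)):   # small calibration pass
        text = sample["text"].strip()
        if not text:
            continue
        inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        model(**inputs.to(model.device))

for handle in handles:
    handle.remove()
db.commit()
```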

Analyzing and Identifying:

By analyzing activation data, the script pinpoints which layers are most valuable to the model's performance.
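
The actual selection curve is not published. As a stand-in heuristic only, the layers could simply be ranked by their average logged activation and the top few kept as the 'beneficial layers':

```python
# Sketch: rank layers by average logged activation and keep the strongest ones.
import sqlite3

db = sqlite3.connect("activations.db")
rows = db.execute(
    "SELECT layer_idx, AVG(mean_abs_activation) AS score "
    "FROM layer_activations GROUP BY layer_idx ORDER BY score DESC"
).fetchall()

TOP_K = 6  # hypothetical cutoff; the real DLEC curve-based selection will differ
beneficial_layers = sorted(idx for idx, _ in rows[:TOP_K])
print("Beneficial layers:", beneficial_layers)
```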

Configuring DLEC:

A configuration is then created, guiding how the model should incorporate duplicates of these beneficial layers to boost effectiveness without unnecessarily increasing complexity.
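
Since the duplication itself is currently done through MergeKit (see the note at the end of this README), the configuration step presumably ends in a passthrough config. The sketch below shows one way such a config could be generated; the layer indices are placeholders, and because MergeKit slices on layer ranges rather than single layers, each chosen layer is duplicated by overlapping two adjacent slices on it.

```python
# Sketch: turn the selected layers into a MergeKit passthrough config (YAML).
import yaml

BASE_MODEL = "abacaj/phi-2-super"
NUM_LAYERS = 32                      # Phi-2 has 32 decoder layers
beneficial_layers = [8, 14, 20, 26]  # placeholder indices, not the real DLEC selection

slices = []
start = 0
for idx in sorted(beneficial_layers):
    # End this slice just past the beneficial layer...
    slices.append({"sources": [{"model": BASE_MODEL, "layer_range": [start, idx + 1]}]})
    # ...and start the next slice at that same layer, so it appears twice in the output.
    start = idx
slices.append({"sources": [{"model": BASE_MODEL, "layer_range": [start, NUM_LAYERS]}]})

config = {"slices": slices, "merge_method": "passthrough", "dtype": "bfloat16"}

with open("dlec-passthrough.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```

The resulting file can then be passed to MergeKit's `mergekit-yaml` command to build the expanded model.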

# Key Features:

Selective Layer Duplication:

DLEC doesn't just add more layers; it doubles down on the ones that really matter. This methodical selection ensures we're making the most of the model's capabilities without wasteful expansion.

Smart Resource Management:

By honing in on specific areas for improvement, DLEC aims to make better use of computational and memory resources, promoting more efficient learning without adding undue complexity to the model.

This approach is about making informed, strategic enhancements to model architecture, prioritizing efficiency and effectiveness in utilizing neural network capabilities.

# Information Loss:

It is common to observe a loss of intelligence when merging models, especially with passthrough merging, which typically costs around 3 points per billion parameters duplicated when the merge is done correctly; a suboptimal merge can lose far more, roughly 3-8 points or more per billion parameters duplicated. With DLEC, however, I was able to grow Phi-2 from 2.78B to 3.25B parameters with a minimal loss of around 0.44 points on average.

DLEC Expanded Model:
[TheSkullery/phi-2-DLEC](https://huggingface.co/TheSkullery/phi-2-DLEC)
2.78B -> 3.25B parameters, a ~17% increase in size
```
Metric      Value
Avg.        46.72
AGIEval     29.64
GPT4All     69.48
TruthfulQA  50.29
```

Original Model:
[abacaj/phi-2-super](https://huggingface.co/abacaj/phi-2-super)
```
Metric      Value
Avg.        47.16
AGIEval     31.95
GPT4All     70.81
TruthfulQA  48.39
```

Loss or Increase (DLEC model vs. original):
Avg.        -0.44
AGIEval     -2.31
GPT4All     -1.33
TruthfulQA  +1.90

Example of loss:
[Steelskull/Etheria-55b-v0.1](https://huggingface.co/Steelskull/Etheria-55b-v0.1)
```
Metric                   Value
Avg.                     64.69
AI2 Reasoning Challenge  65.10
HellaSwag                81.93
MMLU                     73.66
TruthfulQA               56.16
Winogrande               76.09
GSM8k                    35.18
```

[Yi-34B-200K-DARE-megamerge-v8](https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-megamerge-v8)
```
Metric                   Value
Avg.                     72.56
AI2 Reasoning Challenge  67.75
HellaSwag                86.06
MMLU                     77.03
TruthfulQA               56.31
Winogrande               82.79
GSM8k                    65.43
```

Merge Loss (Etheria-55b-v0.1 relative to Yi-34B-200K-DARE-megamerge-v8):
Avg.                     -7.87
AI2 Reasoning Challenge  -2.65
HellaSwag                -4.13
MMLU                     -3.37
TruthfulQA               -0.15
Winogrande               -6.70
GSM8k                    -30.25

In this example, the merged Etheria-55b-v0.1 shows a significant decrease in performance across all metrics relative to Yi-34B-200K-DARE-megamerge-v8, with the average score dropping by 7.87 points. The most notable gap is on the GSM8k benchmark, where Yi-34B-200K-DARE-megamerge-v8 outperforms Etheria-55b-v0.1 by 30.25 points.

This method is still in active development. I am currently tweaking the algorithm to improve the layer selection process, and I am also working on a single-layer duplication script, since MergeKit does not currently support duplicating individual layers; as a result, I am currently duplicating layers that are not needed, which degrades performance.