---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B
pipeline_tag: text-generation
tags:
- not-for-all-audiences
language:
- en
library_name: transformers
---

## Model Description

This model was created by analyzing other Qwen2.5-7B models and selecting, for each layer, the candidate that uses its dimensions most efficiently, as measured by Normalized Effective Rank (NER). NER is computed as follows:

- Input: weight matrix for each model layer
- Compute singular values σᵢ where σᵢ ≥ 0 # σᵢ represents the importance of each dimension
- Keep only values above a numerical threshold (>1e-12)
- Sum the remaining singular values: S = Σσᵢ # S acts as normalization factor
- Create a probability distribution: pᵢ = σᵢ/S # converts singular values to probabilities summing to 1
- Compute Shannon entropy: H = -Σ(pᵢ * log₂(pᵢ)) # measures information content
- Calculate the maximum possible entropy: H_max = log₂(n)
- Final NER score = H/H_max # normalizes score to [0,1] range
- Result is a value between 0 and 1 for each model layer
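
The steps above can be sketched in a few lines of NumPy (a minimal sketch; `normalized_effective_rank` is an illustrative name, not necessarily the function used in the released script):

```python
import numpy as np

def normalized_effective_rank(weight: np.ndarray, eps: float = 1e-12) -> float:
    """NER of a weight matrix: Shannon entropy of the normalized
    singular-value distribution, divided by the maximum entropy log2(n)."""
    s = np.linalg.svd(weight, compute_uv=False)  # singular values, σᵢ ≥ 0
    s = s[s > eps]                               # keep values above threshold
    p = s / s.sum()                              # probability distribution pᵢ
    entropy = -np.sum(p * np.log2(p))            # Shannon entropy H
    max_entropy = np.log2(len(p))                # H_max = log2(n)
    return float(entropy / max_entropy) if max_entropy > 0 else 0.0
```

An identity matrix (all singular values equal) scores 1.0, while a rank-1 matrix scores 0.0, matching the [0,1] range described above.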

## Creating the Composite Model

Code here: https://huggingface.co/jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0/blob/main/ner_merge.py

The script:
- Downloads the selected models from the Hugging Face Hub
- Calculates the Normalized Effective Rank (NER) of each layer within each model
- Determines, for each layer name, which model in the pool has the highest NER score
- Incrementally builds a composite model from the highest-NER layer at each position
- Saves merge reports documenting the source model of every layer
- Copies the config and tokenizer files from the base model
- Saves the composite model with complete weights # model ready to use
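
The layer-selection step can be sketched as follows (a hypothetical sketch of the idea, not the released code; `select_best_layers` and its input shape are illustrative assumptions):

```python
def select_best_layers(ner_scores: dict[str, dict[str, float]]) -> dict[str, str]:
    """Given ner_scores mapping model_name -> {layer_name: NER score},
    return layer_name -> model_name of the highest-NER candidate."""
    best: dict[str, tuple[float, str]] = {}
    for model, layers in ner_scores.items():
        for layer, score in layers.items():
            # keep this model's layer if it beats the current best NER
            if layer not in best or score > best[layer][0]:
                best[layer] = (score, model)
    return {layer: model for layer, (_, model) in best.items()}
```

The composite model is then assembled by copying, for each layer name, the weights from the winning model.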

Config file:

```yaml
base_model: "Qwen/Qwen2.5-7B"

fine_tuned_models: # uncomment the models you want to merge
#- "Qwen/Qwen2.5-7B"
#- "Qwen/Qwen2.5-7B-Instruct"
#- "EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1"
#- "FourOhFour/Vapor_v2_7B"
#- "Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2"
#- "happzy2633/qwen2.5-7b-ins-v3"
#- "huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2"
#- "HumanLLMs/Humanish-Qwen2.5-7B-Instruct"
#- "Orion-zhen/Qwen2.5-7B-Instruct-Uncensored"
#- "Orion-zhen/Meissa-Qwen2.5-7B-Instruct"
#- "jeffmeloy/Qwen2.5-7B-nerd-uncensored-v0.9"
#- "jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0"
#- "jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.1"
#- "jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.2"
#- "AmberYifan/Qwen2.5-7B-dpo-2k"
#- "sethuiyer/Qwen2.5-7B-Anvita"
#- "rombodawg/Rombos-LLM-V2.5-Qwen-7b"
#- "Cran-May/T.E-8.1"
#- "beomi/Qwen2.5-7B-Instruct-kowiki-qa"
#- "Orion-zhen/Qwen2.5-7B-Gutenberg-KTO"
#- "fblgit/cybertron-v4-qw7B-MGS"
#- "nguyentd/FinancialAdvice-Qwen2.5-7B"
#- "WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B"
#- "edgerunner-ai/EdgeRunner-Command-Nested"
#- "katanemo/Arch-Function-7B"
#- "DeepGlint-AI/llava-mlcd-qwen2.5-7b"
#- "mergekit-community/mergekit-slerp-aflqaqy"
#- "mergekit-community/mergekit-ties-inxwsfo"
#- "Qwen/Qwen2.5-Coder-7B-Instruct"
#- "Qwen/Qwen2.5-Math-7B-Instruct"
#- "Qwen/Qwen2.5-Coder-7B"
#- "Qwen/Qwen2.5-Math-7B"
#- "thomas-yanxin/XinYuan-Qwen2.5-7B-0917"
#- "jbjeong91/Qwen2.5_7B_IST_StoryGen_vanilla"
#- "AmberYifan/Qwen2.5-7B-dpo-2k-hhrlhf"
#- "jbjeong91/Qwen2.5_7B_IST_StoryGen_test2"

models_dir: "./input_models/"
output_dir: "./merged_model/"
metric_dir: "./metrics/"
```
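
The config is plain YAML, so its values load directly with PyYAML (a minimal sketch assuming PyYAML is installed; shown on an inline string with one model uncommented, rather than a file, and `ner_merge.py`'s actual parsing may differ):

```python
import yaml

CONFIG_TEXT = """\
base_model: "Qwen/Qwen2.5-7B"
fine_tuned_models:
- "Qwen/Qwen2.5-7B-Instruct"
models_dir: "./input_models/"
output_dir: "./merged_model/"
metric_dir: "./metrics/"
"""

# safe_load parses the YAML into a plain dict; commented-out models
# are simply absent from the fine_tuned_models list.
config = yaml.safe_load(CONFIG_TEXT)
```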