sometimesanotion committed on
Commit 34d172f · verified · 1 Parent(s): 39236e0

Update README.md

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -10,7 +10,9 @@ tags:
 ---
 # merge
 
-The merits of multi-stage arcee_fusion merges are clearly shown in [sometimesanotion/Lamarck-14B-v0.7-Fusion](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.7-Fusion), which has a valuable uptick in GPQA over its predecessors. Will its gains be maintained with a modified version of the SLERP recipe from [suayptalha/Lamarckvergence-14B](https://huggingface.co/suayptalha/Lamarckvergence-14B)? Clearly, self-attention and perceptrons can unlock a lot of power in this kind of merge.
+The merits of multi-stage arcee_fusion merges are clearly shown in [sometimesanotion/Lamarck-14B-v0.7-Fusion](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.7-Fusion), which has a valuable uptick in GPQA over its predecessors. Will its gains be maintained with a modified version of the SLERP recipe from [suayptalha/Lamarckvergence-14B](https://huggingface.co/suayptalha/Lamarckvergence-14B)? Let's find out what these weights for self-attention and perceptrons can unlock in this merge.
+
+Why isn't this the next version of Lamarck? It has not undergone the highly layer-targeting merges that go into a Lamarck release, and to truly refine Lamarck v0.7 requires top-notch components. This one, perhaps.
 
 ## Merge Details
 ### Merge Method
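
For readers wondering what "weights for self-attention and perceptrons" can look like in practice, below is a minimal mergekit SLERP sketch with separate interpolation curves for attention and MLP tensors. The model pairing, layer range, and t values here are illustrative assumptions for this note, not the actual recipe used for this repository.

```yaml
# Hypothetical mergekit SLERP recipe: separate interpolation curves for
# self-attention and MLP ("perceptron") weights. Values are illustrative
# only, not the configuration actually committed to this repository.
slices:
  - sources:
      - model: sometimesanotion/Lamarck-14B-v0.7-Fusion
        layer_range: [0, 48]
      - model: suayptalha/Lamarckvergence-14B
        layer_range: [0, 48]
merge_method: slerp
base_model: sometimesanotion/Lamarck-14B-v0.7-Fusion
parameters:
  t:
    - filter: self_attn      # attention tensors lean toward one parent...
      value: [0.0, 0.5, 0.3, 0.7, 1.0]
    - filter: mlp            # ...while MLP tensors lean toward the other
      value: [1.0, 0.5, 0.7, 0.3, 0.0]
    - value: 0.5             # default for all remaining tensors
dtype: bfloat16
```

Each per-filter t list is interpolated across the layer stack, which is what lets a SLERP merge treat attention and feed-forward blocks differently at different depths.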