Update README.md
README.md
CHANGED
@@ -18,6 +18,8 @@ Same idea as Lambent/qwen2.5-14B-selfmerge-A, but training the base model on an
 
 Hope is that the lightweight instruction tuning might add some synergy with the original instruct.
 
+Testing: eq-bench showed no syntax errors and the result was 75.6984, closer to the original instruct value of 76.9195 than to selfmerge-A (which had 73.8068).
+
 Subsets of mrfakename/Capybara-ShareGPT, abacusai/SystemChat-1.1, anthracite-org/nopm_claude_writing_fixed and fineweb-edu were used for the alternate training.
 
 ### Merge Method
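The diff only names the alternate training data; it does not show how the subsets were drawn. As a rough sketch (not part of this commit), the snippet below uses the Hugging Face `datasets` library to pull subsets of the listed corpora; the subset sizes and the `HuggingFaceFW/fineweb-edu` Hub ID are illustrative assumptions, not values taken from the README.

```python
# Hypothetical sketch: assembling small subsets of the datasets named above.
# Subset sizes are placeholders, not the ones actually used for training.
from datasets import load_dataset

def take_subset(name, n, split="train"):
    """Load a Hub dataset and keep only the first n examples."""
    ds = load_dataset(name, split=split)
    return ds.select(range(min(n, len(ds))))

capybara = take_subset("mrfakename/Capybara-ShareGPT", 2_000)
systemchat = take_subset("abacusai/SystemChat-1.1", 2_000)
claude_writing = take_subset("anthracite-org/nopm_claude_writing_fixed", 1_000)

# fineweb-edu is large, so stream it and take a slice rather than downloading it all.
# The Hub ID here is assumed; the README only says "fineweb-edu".
fineweb_edu = load_dataset("HuggingFaceFW/fineweb-edu", split="train", streaming=True).take(5_000)

# The subsets use different schemas (ShareGPT-style conversations, system-chat
# turns, plain text), so each would likely be mapped to one chat format before
# being mixed for the lightweight instruction-tuning pass.
```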