sophosympatheia committed Update README.md
Commit cde0949 · Parent: e40e14c

Updating reference to wizard-tulu-dolphin model since I released it
README.md CHANGED

```diff
@@ -11,7 +11,7 @@ language:
 
 This version of Midnight Rose has a complex family tree but I'll do my best to describe it. I will include mergekit yml files below.
 
 * midnight-rose-70b-v2.0.1 (Component 1, unreleased): A DARE TIES merge of midnight-rose-70b-v1.0 and an unreleased midnight-rose-70b-v1.4 that used the same underlying models but with different weights, and it had different LoRAs applied to it.
 
-* wizard-tulu-dolphin-70b-v1.0 (Component 2
+* [wizard-tulu-dolphin-70b-v1.0](https://huggingface.co/sophosympatheia/Wizard-Tulu-Dolphin-70B-v1.0) (Component 2): This model was the result of a DARE TIES merge between [WizardLM-70B-V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) and [tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b), which I then SLERP merged with a modified version of [dolphin-2.2-70b](https://huggingface.co/cognitivecomputations/dolphin-2.2-70b).
 
 * Finally, I SLERP merged Component 1 and Component 2 above to produce this model.
 
 What I like about this version of Midnight Rose is it picked up some spicyness from Component 1 and some smarts from Component 2.
```
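The final step described above (SLERP-merging Component 1 and Component 2) would look roughly like the following mergekit config. This is an illustrative sketch, not the author's actual yml: the Component 1 path, the interpolation factor `t`, and the dtype are assumptions, and the real configs are said to be included in the README itself.

```yaml
# Hypothetical mergekit SLERP config for the final merge step.
# The Component 1 path, t, and dtype are assumptions, not the author's values.
models:
  - model: /workspace/midnight-rose-70b-v2.0.1            # Component 1 (unreleased; path assumed)
  - model: sophosympatheia/Wizard-Tulu-Dolphin-70B-v1.0   # Component 2
merge_method: slerp
base_model: /workspace/midnight-rose-70b-v2.0.1
parameters:
  t: 0.5        # interpolation factor between the two models; real value unknown
dtype: float16
```

A config in this shape would be run with mergekit's CLI, e.g. `mergekit-yaml config.yml ./output-model`, which writes the merged weights to the output directory.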